00:00:00.000 Started by upstream project "autotest-per-patch" build number 132485 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.059 using credential 00000000-0000-0000-0000-000000000002 00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.179 Using shallow fetch with depth 1 00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.243 > git --version # 'git version 2.39.2' 00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:03:23.371 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:03:23.385 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:03:23.398 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:03:23.398 > git config core.sparsecheckout # timeout=10 00:03:23.409 > git read-tree -mu HEAD # timeout=10 00:03:23.424 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:03:23.447 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:03:23.447 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:03:23.547 [Pipeline] Start of Pipeline 00:03:23.562 [Pipeline] library 00:03:23.564 Loading library shm_lib@master 00:03:33.759 Library shm_lib@master is cached. Copying from home. 00:03:33.827 [Pipeline] node 00:03:33.929 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest 00:03:33.937 [Pipeline] { 00:03:33.952 [Pipeline] catchError 00:03:33.955 [Pipeline] { 00:03:33.968 [Pipeline] wrap 00:03:34.007 [Pipeline] { 00:03:34.023 [Pipeline] stage 00:03:34.025 [Pipeline] { (Prologue) 00:03:34.041 [Pipeline] echo 00:03:34.042 Node: VM-host-SM0 00:03:34.047 [Pipeline] cleanWs 00:03:34.056 [WS-CLEANUP] Deleting project workspace... 00:03:34.056 [WS-CLEANUP] Deferred wipeout is used... 
00:03:34.062 [WS-CLEANUP] done 00:03:34.315 [Pipeline] setCustomBuildProperty 00:03:34.396 [Pipeline] httpRequest 00:03:35.103 [Pipeline] echo 00:03:35.104 Sorcerer 10.211.164.20 is alive 00:03:35.111 [Pipeline] retry 00:03:35.112 [Pipeline] { 00:03:35.121 [Pipeline] httpRequest 00:03:35.125 HttpMethod: GET 00:03:35.125 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:35.125 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:35.127 Response Code: HTTP/1.1 200 OK 00:03:35.127 Success: Status code 200 is in the accepted range: 200,404 00:03:35.127 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:35.273 [Pipeline] } 00:03:35.284 [Pipeline] // retry 00:03:35.291 [Pipeline] sh 00:03:35.571 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:03:35.586 [Pipeline] httpRequest 00:03:35.930 [Pipeline] echo 00:03:35.932 Sorcerer 10.211.164.20 is alive 00:03:35.940 [Pipeline] retry 00:03:35.941 [Pipeline] { 00:03:35.951 [Pipeline] httpRequest 00:03:35.954 HttpMethod: GET 00:03:35.955 URL: http://10.211.164.20/packages/spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:03:35.955 Sending request to url: http://10.211.164.20/packages/spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:03:35.956 Response Code: HTTP/1.1 404 Not Found 00:03:35.957 Success: Status code 404 is in the accepted range: 200,404 00:03:35.957 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:03:35.959 [Pipeline] } 00:03:35.969 [Pipeline] // retry 00:03:35.973 [Pipeline] sh 00:03:36.246 + rm -f spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:03:36.258 [Pipeline] retry 00:03:36.259 [Pipeline] { 00:03:36.281 [Pipeline] checkout 00:03:36.288 The recommended git tool is: NONE 00:03:36.312 using credential 00000000-0000-0000-0000-000000000002 00:03:36.314 Wiping out workspace first. 00:03:36.323 Cloning the remote Git repository 00:03:36.326 Honoring refspec on initial clone 00:03:36.331 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:03:36.332 > git init /var/jenkins/workspace/nvme-vg-autotest/spdk # timeout=10 00:03:36.345 Using reference repository: /var/ci_repos/spdk_multi 00:03:36.345 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:03:36.345 > git --version # timeout=10 00:03:36.348 > git --version # 'git version 2.25.1' 00:03:36.348 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:36.351 Setting http proxy: proxy-dmz.intel.com:911 00:03:36.351 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/63/25463/2 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:04:15.938 Avoid second fetch 00:04:15.955 Checking out Revision 1e9cebf1906bf9e4023a8547d868ff77a95aae6d (FETCH_HEAD) 00:04:16.236 Commit message: "util: multi-level fd_group nesting" 00:04:16.245 First time build. Skipping changelog. 
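The clone above is worth unpacking: rather than downloading the full SPDK history, the job seeds a fresh repository from a local object cache (/var/ci_repos/spdk_multi) and then fetches only the Gerrit change under test. A minimal sketch of the same pattern, using the URL, refspec, and revision taken from this log (the cache path exists on these CI hosts; elsewhere, substitute any local SPDK clone):

    # Borrow objects from the local cache so the network transfer stays small.
    git clone --reference /var/ci_repos/spdk_multi \
        https://review.spdk.io/gerrit/a/spdk/spdk spdk
    cd spdk
    # Fetch patch set 2 of Gerrit change 25463, then check out its commit.
    git fetch origin refs/changes/63/25463/2
    git checkout -f 1e9cebf1906bf9e4023a8547d868ff77a95aae6d

A repository cloned with --reference is not self-contained, which is why the job later repacks and deletes .git/objects/info/alternates before tarring the tree for the package cache.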
00:04:15.917 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:04:15.921 > git config --add remote.origin.fetch refs/changes/63/25463/2 # timeout=10 00:04:15.924 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:04:15.940 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:04:15.949 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:04:15.958 > git config core.sparsecheckout # timeout=10 00:04:15.962 > git checkout -f 1e9cebf1906bf9e4023a8547d868ff77a95aae6d # timeout=10 00:04:16.238 > git rev-list --no-walk eb055bb93252b0fc9e854d82315bd3a3991825f9 # timeout=10 00:04:16.250 > git remote # timeout=10 00:04:16.254 > git submodule init # timeout=10 00:04:16.313 > git submodule sync # timeout=10 00:04:16.366 > git config --get remote.origin.url # timeout=10 00:04:16.375 > git submodule init # timeout=10 00:04:16.425 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:04:16.429 > git config --get submodule.dpdk.url # timeout=10 00:04:16.434 > git remote # timeout=10 00:04:16.438 > git config --get remote.origin.url # timeout=10 00:04:16.442 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:04:16.445 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:04:16.449 > git remote # timeout=10 00:04:16.454 > git config --get remote.origin.url # timeout=10 00:04:16.457 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:04:16.461 > git config --get submodule.isa-l.url # timeout=10 00:04:16.465 > git remote # timeout=10 00:04:16.469 > git config --get remote.origin.url # timeout=10 00:04:16.473 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:04:16.476 > git config --get submodule.ocf.url # timeout=10 00:04:16.479 > git remote # timeout=10 00:04:16.483 > git config --get remote.origin.url # timeout=10 00:04:16.486 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:04:16.489 > git config --get submodule.libvfio-user.url # timeout=10 00:04:16.492 > git remote # timeout=10 00:04:16.497 > git config --get remote.origin.url # timeout=10 00:04:16.501 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:04:16.504 > git config --get submodule.xnvme.url # timeout=10 00:04:16.508 > git remote # timeout=10 00:04:16.513 > git config --get remote.origin.url # timeout=10 00:04:16.516 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:04:16.519 > git config --get submodule.isa-l-crypto.url # timeout=10 00:04:16.523 > git remote # timeout=10 00:04:16.528 > git config --get remote.origin.url # timeout=10 00:04:16.531 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.537 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:16.540 Setting http proxy: proxy-dmz.intel.com:911 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:04:16.541 Setting http proxy: proxy-dmz.intel.com:911 00:04:16.541 Setting 
http proxy: proxy-dmz.intel.com:911 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:04:16.541 Setting http proxy: proxy-dmz.intel.com:911 00:04:16.541 Setting http proxy: proxy-dmz.intel.com:911 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:04:16.541 Setting http proxy: proxy-dmz.intel.com:911 00:04:16.541 Setting http proxy: proxy-dmz.intel.com:911 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:04:16.541 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:04:43.499 [Pipeline] dir 00:04:43.499 Running in /var/jenkins/workspace/nvme-vg-autotest/spdk 00:04:43.500 [Pipeline] { 00:04:43.510 [Pipeline] sh 00:04:43.793 ++ nproc 00:04:43.793 + threads=88 00:04:43.793 + git repack -a -d --threads=88 00:04:50.357 + git submodule foreach git repack -a -d --threads=88 00:04:50.357 Entering 'dpdk' 00:04:53.643 Entering 'intel-ipsec-mb' 00:04:54.210 Entering 'isa-l' 00:04:54.469 Entering 'isa-l-crypto' 00:04:54.469 Entering 'libvfio-user' 00:04:54.729 Entering 'ocf' 00:04:55.296 Entering 'xnvme' 00:04:55.296 + find .git -type f -name alternates -print -delete 00:04:55.296 .git/objects/info/alternates 00:04:55.296 .git/modules/ocf/objects/info/alternates 00:04:55.296 .git/modules/dpdk/objects/info/alternates 00:04:55.296 .git/modules/xnvme/objects/info/alternates 00:04:55.296 .git/modules/libvfio-user/objects/info/alternates 00:04:55.296 .git/modules/intel-ipsec-mb/objects/info/alternates 00:04:55.296 .git/modules/isa-l/objects/info/alternates 00:04:55.296 .git/modules/isa-l-crypto/objects/info/alternates 00:04:55.305 [Pipeline] } 00:04:55.321 [Pipeline] // dir 00:04:55.326 [Pipeline] } 00:04:55.341 [Pipeline] // retry 00:04:55.349 [Pipeline] sh 00:04:55.627 + hash pigz 00:04:55.627 + tar -czf spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz spdk 00:05:10.545 [Pipeline] retry 00:05:10.547 [Pipeline] { 00:05:10.561 [Pipeline] httpRequest 00:05:10.568 HttpMethod: PUT 00:05:10.568 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:05:10.569 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:05:14.576 Response Code: HTTP/1.1 200 OK 00:05:14.587 Success: Status code 200 is in the accepted range: 200 00:05:14.589 [Pipeline] } 00:05:14.605 [Pipeline] // retry 00:05:14.613 [Pipeline] echo 00:05:14.615 00:05:14.615 Locking 00:05:14.615 Waited 1s for lock 00:05:14.615 File already exists: /storage/packages/spdk_1e9cebf1906bf9e4023a8547d868ff77a95aae6d.tar.gz 00:05:14.615 00:05:14.620 [Pipeline] sh 00:05:14.900 + git -C spdk log --oneline -n5 00:05:14.901 1e9cebf19 util: multi-level fd_group nesting 00:05:14.901 09301ca15 util: keep track of nested child fd_groups 00:05:14.901 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:05:14.901 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:05:14.901 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:05:14.920 [Pipeline] 
writeFile 00:05:14.938 [Pipeline] sh 00:05:15.222 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:15.233 [Pipeline] sh 00:05:15.512 + cat autorun-spdk.conf 00:05:15.512 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:15.512 SPDK_TEST_NVME=1 00:05:15.512 SPDK_TEST_FTL=1 00:05:15.512 SPDK_TEST_ISAL=1 00:05:15.512 SPDK_RUN_ASAN=1 00:05:15.512 SPDK_RUN_UBSAN=1 00:05:15.512 SPDK_TEST_XNVME=1 00:05:15.512 SPDK_TEST_NVME_FDP=1 00:05:15.512 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:15.519 RUN_NIGHTLY=0 00:05:15.521 [Pipeline] } 00:05:15.532 [Pipeline] // stage 00:05:15.555 [Pipeline] stage 00:05:15.558 [Pipeline] { (Run VM) 00:05:15.611 [Pipeline] sh 00:05:15.896 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:15.896 + echo 'Start stage prepare_nvme.sh' 00:05:15.896 Start stage prepare_nvme.sh 00:05:15.896 + [[ -n 5 ]] 00:05:15.896 + disk_prefix=ex5 00:05:15.896 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:05:15.896 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:05:15.896 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:05:15.896 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:15.896 ++ SPDK_TEST_NVME=1 00:05:15.896 ++ SPDK_TEST_FTL=1 00:05:15.896 ++ SPDK_TEST_ISAL=1 00:05:15.896 ++ SPDK_RUN_ASAN=1 00:05:15.896 ++ SPDK_RUN_UBSAN=1 00:05:15.896 ++ SPDK_TEST_XNVME=1 00:05:15.896 ++ SPDK_TEST_NVME_FDP=1 00:05:15.896 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:15.896 ++ RUN_NIGHTLY=0 00:05:15.896 + cd /var/jenkins/workspace/nvme-vg-autotest 00:05:15.896 + nvme_files=() 00:05:15.896 + declare -A nvme_files 00:05:15.896 + backend_dir=/var/lib/libvirt/images/backends 00:05:15.896 + nvme_files['nvme.img']=5G 00:05:15.896 + nvme_files['nvme-cmb.img']=5G 00:05:15.896 + nvme_files['nvme-multi0.img']=4G 00:05:15.896 + nvme_files['nvme-multi1.img']=4G 00:05:15.896 + nvme_files['nvme-multi2.img']=4G 00:05:15.896 + nvme_files['nvme-openstack.img']=8G 00:05:15.896 + nvme_files['nvme-zns.img']=5G 00:05:15.896 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:15.896 + (( SPDK_TEST_FTL == 1 )) 00:05:15.896 + nvme_files["nvme-ftl.img"]=6G 00:05:15.896 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:15.896 + nvme_files["nvme-fdp.img"]=1G 00:05:15.896 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:05:15.896 + for nvme in "${!nvme_files[@]}" 00:05:15.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:05:15.896 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:15.896 + for nvme in "${!nvme_files[@]}" 00:05:15.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:05:15.896 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:05:15.896 + for nvme in "${!nvme_files[@]}" 00:05:15.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:05:15.896 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:15.896 + for nvme in "${!nvme_files[@]}" 00:05:15.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:05:16.153 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:16.153 + for nvme in "${!nvme_files[@]}" 00:05:16.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:05:16.153 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:16.153 + for nvme in "${!nvme_files[@]}" 00:05:16.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:05:16.153 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:16.153 + for nvme in "${!nvme_files[@]}" 00:05:16.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:05:16.153 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:16.153 + for nvme in "${!nvme_files[@]}" 00:05:16.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:05:16.153 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:05:16.153 + for nvme in "${!nvme_files[@]}" 00:05:16.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:05:16.412 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:16.412 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:05:16.412 + echo 'End stage prepare_nvme.sh' 00:05:16.412 End stage prepare_nvme.sh 00:05:16.424 [Pipeline] sh 00:05:16.708 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:16.708 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:05:16.708 00:05:16.708 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:05:16.708 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:05:16.709 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:05:16.709 HELP=0
00:05:16.709 DRY_RUN=0
00:05:16.709 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:05:16.709 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:05:16.709 NVME_AUTO_CREATE=0
00:05:16.709 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:05:16.709 NVME_CMB=,,,,
00:05:16.709 NVME_PMR=,,,,
00:05:16.709 NVME_ZNS=,,,,
00:05:16.709 NVME_MS=true,,,,
00:05:16.709 NVME_FDP=,,,on,
00:05:16.709 SPDK_VAGRANT_DISTRO=fedora39
00:05:16.709 SPDK_VAGRANT_VMCPU=10
00:05:16.709 SPDK_VAGRANT_VMRAM=12288
00:05:16.709 SPDK_VAGRANT_PROVIDER=libvirt
00:05:16.709 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:05:16.709 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:05:16.709 SPDK_OPENSTACK_NETWORK=0
00:05:16.709 VAGRANT_PACKAGE_BOX=0
00:05:16.709 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:05:16.709 FORCE_DISTRO=true
00:05:16.709 VAGRANT_BOX_VERSION=
00:05:16.709 EXTRA_VAGRANTFILES=
00:05:16.709 NIC_MODEL=e1000
00:05:16.709
00:05:16.709 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:05:16.709 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:05:19.993 Bringing machine 'default' up with 'libvirt' provider...
00:05:20.560 ==> default: Creating image (snapshot of base box volume).
00:05:20.560 ==> default: Creating domain with the following settings...
00:05:20.560 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732529474_4dd83a4d8781cec15690
00:05:20.560 ==> default: -- Domain type: kvm
00:05:20.560 ==> default: -- Cpus: 10
00:05:20.560 ==> default: -- Feature: acpi
00:05:20.560 ==> default: -- Feature: apic
00:05:20.560 ==> default: -- Feature: pae
00:05:20.560 ==> default: -- Memory: 12288M
00:05:20.560 ==> default: -- Memory Backing: hugepages:
00:05:20.560 ==> default: -- Management MAC:
00:05:20.560 ==> default: -- Loader:
00:05:20.560 ==> default: -- Nvram:
00:05:20.560 ==> default: -- Base box: spdk/fedora39
00:05:20.560 ==> default: -- Storage pool: default
00:05:20.560 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732529474_4dd83a4d8781cec15690.img (20G)
00:05:20.560 ==> default: -- Volume Cache: default
00:05:20.560 ==> default: -- Kernel:
00:05:20.560 ==> default: -- Initrd:
00:05:20.560 ==> default: -- Graphics Type: vnc
00:05:20.560 ==> default: -- Graphics Port: -1
00:05:20.560 ==> default: -- Graphics IP: 127.0.0.1
00:05:20.560 ==> default: -- Graphics Password: Not defined
00:05:20.560 ==> default: -- Video Type: cirrus
00:05:20.560 ==> default: -- Video VRAM: 9216
00:05:20.560 ==> default: -- Sound Type:
00:05:20.560 ==> default: -- Keymap: en-us
00:05:20.560 ==> default: -- TPM Path:
00:05:20.560 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:20.560 ==> default: -- Command line args:
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:20.560 ==> default: -> value=-drive,
00:05:20.560 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:20.560 ==> default: -> value=-drive,
00:05:20.560 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:05:20.560 ==> default: -> value=-drive,
00:05:20.560 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:20.560 ==> default: -> value=-drive,
00:05:20.560 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:20.560 ==> default: -> value=-drive,
00:05:20.560 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:05:20.560 ==> default: -> value=-drive,
00:05:20.560 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:05:20.560 ==> default: -> value=-device,
00:05:20.560 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:20.819 ==> default: Creating shared folders metadata...
00:05:20.819 ==> default: Starting domain.
00:05:22.720 ==> default: Waiting for domain to get an IP address...
00:05:40.800 ==> default: Waiting for SSH to become available...
00:05:40.800 ==> default: Configuring and enabling network interfaces...
00:05:43.333 default: SSH address: 192.168.121.129:22
00:05:43.333 default: SSH username: vagrant
00:05:43.333 default: SSH auth method: private key
00:05:45.868 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:53.982 ==> default: Mounting SSHFS shared folder...
00:05:55.360 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:55.360 ==> default: Checking Mount..
00:05:56.735 ==> default: Folder Successfully Mounted!
00:05:56.735 ==> default: Running provisioner: file...
00:05:57.334 default: ~/.gitconfig => .gitconfig
00:05:57.899
00:05:57.899 SUCCESS!
00:05:57.899
00:05:57.899 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:05:57.899 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:57.899 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:05:57.899
00:05:57.907 [Pipeline] }
00:05:57.918 [Pipeline] // stage
00:05:57.925 [Pipeline] dir
00:05:57.926 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:05:57.927 [Pipeline] {
00:05:57.936 [Pipeline] catchError
00:05:57.937 [Pipeline] {
00:05:57.947 [Pipeline] sh
00:05:58.224 + vagrant ssh-config --host vagrant
00:05:58.224 + sed -ne /^Host/,$p
00:05:58.224 + tee ssh_conf
00:06:01.528 Host vagrant
00:06:01.528 HostName 192.168.121.129
00:06:01.528 User vagrant
00:06:01.528 Port 22
00:06:01.528 UserKnownHostsFile /dev/null
00:06:01.528 StrictHostKeyChecking no
00:06:01.528 PasswordAuthentication no
00:06:01.528 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:01.528 IdentitiesOnly yes
00:06:01.528 LogLevel FATAL
00:06:01.528 ForwardAgent yes
00:06:01.528 ForwardX11 yes
00:06:01.528
00:06:01.543 [Pipeline] withEnv
00:06:01.545 [Pipeline] {
00:06:01.559 [Pipeline] sh
00:06:01.839 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:01.839 source /etc/os-release
00:06:01.839 [[ -e /image.version ]] && img=$(< /image.version)
00:06:01.839 # Minimal, systemd-like check.
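# (/.dockerenv is created by the Docker runtime at the root of every container,
# so its presence is a cheap, reliable hint that this shell runs inside one.)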
00:06:01.839 if [[ -e /.dockerenv ]]; then
00:06:01.839 # Clear garbage from the node's name:
00:06:01.839 # agt-er_autotest_547-896 -> autotest_547-896
00:06:01.839 # $HOSTNAME is the actual container id
00:06:01.839 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:06:01.839 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:06:01.839 # We can assume this is a mount from a host where container is running,
00:06:01.839 # so fetch its hostname to easily identify the target swarm worker.
00:06:01.839 container="$(< /etc/hostname) ($agent)"
00:06:01.839 else
00:06:01.839 # Fallback
00:06:01.839 container=$agent
00:06:01.839 fi
00:06:01.839 fi
00:06:01.839 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:06:01.839
00:06:02.108 [Pipeline] }
00:06:02.126 [Pipeline] // withEnv
00:06:02.141 [Pipeline] setCustomBuildProperty
00:06:02.158 [Pipeline] stage
00:06:02.160 [Pipeline] { (Tests)
00:06:02.178 [Pipeline] sh
00:06:02.458 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:06:02.732 [Pipeline] sh
00:06:03.011 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:06:03.287 [Pipeline] timeout
00:06:03.287 Timeout set to expire in 50 min
00:06:03.289 [Pipeline] {
00:06:03.305 [Pipeline] sh
00:06:03.584 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:06:04.150 HEAD is now at 1e9cebf19 util: multi-level fd_group nesting
00:06:04.162 [Pipeline] sh
00:06:04.440 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:06:04.712 [Pipeline] sh
00:06:04.992 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:06:05.266 [Pipeline] sh
00:06:05.547 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:06:05.807 ++ readlink -f spdk_repo
00:06:05.808 + DIR_ROOT=/home/vagrant/spdk_repo
00:06:05.808 + [[ -n /home/vagrant/spdk_repo ]]
00:06:05.808 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:06:05.808 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:06:05.808 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:06:05.808 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:06:05.808 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:05.808 + [[ nvme-vg-autotest == pkgdep-* ]]
00:06:05.808 + cd /home/vagrant/spdk_repo
00:06:05.808 + source /etc/os-release
00:06:05.808 ++ NAME='Fedora Linux'
00:06:05.808 ++ VERSION='39 (Cloud Edition)'
00:06:05.808 ++ ID=fedora
00:06:05.808 ++ VERSION_ID=39
00:06:05.808 ++ VERSION_CODENAME=
00:06:05.808 ++ PLATFORM_ID=platform:f39
00:06:05.808 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:05.808 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:05.808 ++ LOGO=fedora-logo-icon
00:06:05.808 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:05.808 ++ HOME_URL=https://fedoraproject.org/
00:06:05.808 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:05.808 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:05.808 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:05.808 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:05.808 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:05.808 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:05.808 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:05.808 ++ SUPPORT_END=2024-11-12
00:06:05.808 ++ VARIANT='Cloud Edition'
00:06:05.808 ++ VARIANT_ID=cloud
00:06:05.808 + uname -a
00:06:05.808 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:05.808 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:06.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:06.332 Hugepages
00:06:06.332 node hugesize free / total
00:06:06.332 node0 1048576kB 0 / 0
00:06:06.332 node0 2048kB 0 / 0
00:06:06.332
00:06:06.332 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:06.332 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:06.332 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:06.591 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:06:06.591 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:06:06.591 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:06:06.591 + rm -f /tmp/spdk-ld-path
00:06:06.591 + source autorun-spdk.conf
00:06:06.591 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:06.591 ++ SPDK_TEST_NVME=1
00:06:06.591 ++ SPDK_TEST_FTL=1
00:06:06.591 ++ SPDK_TEST_ISAL=1
00:06:06.591 ++ SPDK_RUN_ASAN=1
00:06:06.591 ++ SPDK_RUN_UBSAN=1
00:06:06.591 ++ SPDK_TEST_XNVME=1
00:06:06.591 ++ SPDK_TEST_NVME_FDP=1
00:06:06.591 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:06.591 ++ RUN_NIGHTLY=0
00:06:06.591 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:06.591 + [[ -n '' ]]
00:06:06.591 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:06.591 + for M in /var/spdk/build-*-manifest.txt
00:06:06.591 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:06.591 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:06.591 + for M in /var/spdk/build-*-manifest.txt
00:06:06.591 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:06.591 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:06.591 + for M in /var/spdk/build-*-manifest.txt
00:06:06.591 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:06.591 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:06.591 ++ uname
00:06:06.591 + [[ Linux == \L\i\n\u\x ]]
00:06:06.591 + sudo dmesg -T
00:06:06.591 + sudo dmesg --clear
00:06:06.591 + dmesg_pid=5293
+ [[ Fedora Linux == FreeBSD ]] 00:06:06.591 + sudo dmesg -Tw 00:06:06.591 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:06.592 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:06.592 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:06.592 + [[ -x /usr/src/fio-static/fio ]] 00:06:06.592 + export FIO_BIN=/usr/src/fio-static/fio 00:06:06.592 + FIO_BIN=/usr/src/fio-static/fio 00:06:06.592 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:06.592 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:06.592 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:06.592 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:06.592 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:06.592 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:06.592 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:06.592 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:06.592 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:06.592 10:12:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:06.592 10:12:00 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:06.592 10:12:00 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:06.592 10:12:00 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:06.870 10:12:00 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:06:06.870 10:12:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:06.870 10:12:00 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:06.870 10:12:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:06.870 10:12:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.870 10:12:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:06.870 10:12:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:06.870 10:12:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.870 10:12:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.870 10:12:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.870 10:12:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.870 10:12:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.870 10:12:00 -- paths/export.sh@5 -- $ export PATH 00:06:06.870 10:12:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.870 10:12:00 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:06.870 10:12:00 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:06.871 10:12:00 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732529520.XXXXXX 00:06:06.871 10:12:00 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732529520.adm7aU 00:06:06.871 10:12:00 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:06.871 10:12:00 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:06.871 10:12:00 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:06.871 10:12:00 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:06.871 10:12:00 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:06.871 10:12:00 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:06.871 10:12:00 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:06.871 10:12:00 -- common/autotest_common.sh@10 -- $ set +x 00:06:06.871 10:12:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:06:06.871 10:12:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:06.871 10:12:01 -- pm/common@17 -- $ local monitor 00:06:06.871 10:12:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:06.871 10:12:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:06.871 10:12:01 -- pm/common@25 -- $ sleep 1 00:06:06.871 10:12:01 -- pm/common@21 -- $ date +%s 00:06:06.871 10:12:01 -- pm/common@21 -- $ date +%s 00:06:06.871 10:12:01 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732529521 00:06:06.871 10:12:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732529521 00:06:06.871 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732529521_collect-cpu-load.pm.log 00:06:06.871 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732529521_collect-vmstat.pm.log 00:06:07.807 10:12:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:07.807 10:12:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:07.807 10:12:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:07.807 10:12:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:07.807 10:12:02 -- spdk/autobuild.sh@16 -- $ date -u 00:06:07.807 Mon Nov 25 10:12:02 AM UTC 2024 00:06:07.807 10:12:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:07.807 v25.01-pre-221-g1e9cebf19 00:06:07.807 10:12:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:07.807 10:12:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:07.807 10:12:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:07.807 10:12:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:07.807 10:12:02 -- common/autotest_common.sh@10 -- $ set +x 00:06:07.807 ************************************ 00:06:07.807 START TEST asan 00:06:07.807 ************************************ 00:06:07.807 using asan 00:06:07.807 10:12:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:06:07.807 00:06:07.807 real 0m0.000s 00:06:07.807 user 0m0.000s 00:06:07.807 sys 0m0.000s 00:06:07.807 10:12:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:07.807 10:12:02 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:07.807 ************************************ 00:06:07.807 END TEST asan 00:06:07.807 ************************************ 00:06:07.807 10:12:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:07.807 10:12:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:07.807 10:12:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:07.807 10:12:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:07.807 10:12:02 -- common/autotest_common.sh@10 -- $ set +x 00:06:07.807 ************************************ 00:06:07.807 START TEST ubsan 00:06:07.807 ************************************ 00:06:07.807 using ubsan 00:06:07.807 10:12:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:07.807 00:06:07.807 real 0m0.000s 00:06:07.807 user 0m0.000s 00:06:07.807 sys 0m0.000s 00:06:07.807 10:12:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:07.807 10:12:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:07.807 ************************************ 00:06:07.807 END TEST ubsan 00:06:07.807 ************************************ 00:06:08.066 10:12:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:08.066 10:12:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:08.066 10:12:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:08.066 10:12:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:08.066 10:12:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:08.066 10:12:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:08.066 10:12:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:06:08.066 10:12:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:08.066 10:12:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:06:08.066 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.066 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:08.634 Using 'verbs' RDMA provider 00:06:24.447 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:36.705 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:36.705 Creating mk/config.mk...done. 00:06:36.705 Creating mk/cc.flags.mk...done. 00:06:36.705 Type 'make' to build. 00:06:36.705 10:12:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:36.705 10:12:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:36.705 10:12:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:36.705 10:12:30 -- common/autotest_common.sh@10 -- $ set +x 00:06:36.705 ************************************ 00:06:36.705 START TEST make 00:06:36.705 ************************************ 00:06:36.705 10:12:30 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:36.705 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:06:36.705 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:06:36.705 meson setup builddir \ 00:06:36.705 -Dwith-libaio=enabled \ 00:06:36.705 -Dwith-liburing=enabled \ 00:06:36.705 -Dwith-libvfn=disabled \ 00:06:36.705 -Dwith-spdk=disabled \ 00:06:36.705 -Dexamples=false \ 00:06:36.705 -Dtests=false \ 00:06:36.705 -Dtools=false && \ 00:06:36.705 meson compile -C builddir && \ 00:06:36.705 cd -) 00:06:36.705 make[1]: Nothing to be done for 'all'. 
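For reference, the build exercised here can be reproduced outside CI with the same flags autobuild passed to configure above; a minimal sketch, assuming an SPDK tree with submodules checked out and fio sources at /usr/src/fio as on this VM image:

    # Same flags as the spdk/autobuild.sh@67 configure call in this log.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j"$(nproc)"

With --with-xnvme enabled, make first drives the meson build of the xnvme subproject, whose output follows.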
00:06:39.234 The Meson build system 00:06:39.234 Version: 1.5.0 00:06:39.234 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:06:39.234 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:39.234 Build type: native build 00:06:39.234 Project name: xnvme 00:06:39.234 Project version: 0.7.5 00:06:39.234 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:39.234 C linker for the host machine: cc ld.bfd 2.40-14 00:06:39.234 Host machine cpu family: x86_64 00:06:39.234 Host machine cpu: x86_64 00:06:39.234 Message: host_machine.system: linux 00:06:39.234 Compiler for C supports arguments -Wno-missing-braces: YES 00:06:39.234 Compiler for C supports arguments -Wno-cast-function-type: YES 00:06:39.234 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:06:39.234 Run-time dependency threads found: YES 00:06:39.234 Has header "setupapi.h" : NO 00:06:39.234 Has header "linux/blkzoned.h" : YES 00:06:39.234 Has header "linux/blkzoned.h" : YES (cached) 00:06:39.234 Has header "libaio.h" : YES 00:06:39.234 Library aio found: YES 00:06:39.234 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:39.234 Run-time dependency liburing found: YES 2.2 00:06:39.234 Dependency libvfn skipped: feature with-libvfn disabled 00:06:39.234 Found CMake: /usr/bin/cmake (3.27.7) 00:06:39.234 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:06:39.234 Subproject spdk : skipped: feature with-spdk disabled 00:06:39.234 Run-time dependency appleframeworks found: NO (tried framework) 00:06:39.234 Run-time dependency appleframeworks found: NO (tried framework) 00:06:39.234 Library rt found: YES 00:06:39.234 Checking for function "clock_gettime" with dependency -lrt: YES 00:06:39.234 Configuring xnvme_config.h using configuration 00:06:39.235 Configuring xnvme.spec using configuration 00:06:39.235 Run-time dependency bash-completion found: YES 2.11 00:06:39.235 Message: Bash-completions: /usr/share/bash-completion/completions 00:06:39.235 Program cp found: YES (/usr/bin/cp) 00:06:39.235 Build targets in project: 3 00:06:39.235 00:06:39.235 xnvme 0.7.5 00:06:39.235 00:06:39.235 Subprojects 00:06:39.235 spdk : NO Feature 'with-spdk' disabled 00:06:39.235 00:06:39.235 User defined options 00:06:39.235 examples : false 00:06:39.235 tests : false 00:06:39.235 tools : false 00:06:39.235 with-libaio : enabled 00:06:39.235 with-liburing: enabled 00:06:39.235 with-libvfn : disabled 00:06:39.235 with-spdk : disabled 00:06:39.235 00:06:39.235 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:39.493 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:06:39.493 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:06:39.493 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:06:39.493 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:06:39.493 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:06:39.493 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:06:39.493 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:06:39.493 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:06:39.493 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:06:39.493 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:06:39.493 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:06:39.751 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:06:39.751 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:06:39.751 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:06:39.751 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:06:39.751 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:06:39.751 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:06:39.751 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:06:39.751 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:06:39.751 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:06:39.751 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:06:39.751 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:06:39.751 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:06:39.751 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:06:39.751 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:06:39.751 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:06:39.751 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:06:39.751 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:06:40.009 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:06:40.009 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:06:40.009 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:06:40.009 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:06:40.009 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:06:40.009 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:06:40.009 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:06:40.009 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:06:40.009 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:06:40.009 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:06:40.009 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:06:40.009 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:06:40.009 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:06:40.009 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:06:40.009 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:06:40.009 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:06:40.009 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:06:40.009 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:06:40.009 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:06:40.009 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:06:40.009 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:06:40.009 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:06:40.009 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 
00:06:40.009 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:06:40.009 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:06:40.009 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:06:40.009 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:06:40.009 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:06:40.267 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:06:40.267 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:06:40.267 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:06:40.268 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:06:40.268 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:06:40.268 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:06:40.268 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:06:40.268 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:06:40.268 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:06:40.268 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:06:40.268 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:06:40.268 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:06:40.268 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:06:40.268 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:06:40.268 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:06:40.268 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:06:40.526 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:06:40.526 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:06:40.783 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:06:40.783 [75/76] Linking static target lib/libxnvme.a
00:06:40.783 [76/76] Linking target lib/libxnvme.so.0.7.5
00:06:40.783 INFO: autodetecting backend as ninja
00:06:40.783 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:41.041 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:06:51.010 The Meson build system
00:06:51.010 Version: 1.5.0
00:06:51.010 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:06:51.010 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:06:51.010 Build type: native build
00:06:51.010 Program cat found: YES (/usr/bin/cat)
00:06:51.010 Project name: DPDK
00:06:51.010 Project version: 24.03.0
00:06:51.010 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:51.010 C linker for the host machine: cc ld.bfd 2.40-14
00:06:51.010 Host machine cpu family: x86_64
00:06:51.011 Host machine cpu: x86_64
00:06:51.011 Message: ## Building in Developer Mode ##
00:06:51.011 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:51.011 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:06:51.011 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:51.011 Program python3 found: YES (/usr/bin/python3)
00:06:51.011 Program cat found: YES (/usr/bin/cat)
00:06:51.011 Compiler for C supports arguments -march=native: YES
00:06:51.011 Checking for size of "void *" : 8
00:06:51.011 Checking for size of "void *" : 8 (cached)
00:06:51.011 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:51.011 Library m found: YES
00:06:51.011 Library numa found: YES
00:06:51.011 Has header "numaif.h" : YES
00:06:51.011 Library fdt found: NO
00:06:51.011 Library execinfo found: NO
00:06:51.011 Has header "execinfo.h" : YES
00:06:51.011 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:51.011 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:51.011 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:51.011 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:51.011 Run-time dependency openssl found: YES 3.1.1
00:06:51.011 Run-time dependency libpcap found: YES 1.10.4
00:06:51.011 Has header "pcap.h" with dependency libpcap: YES
00:06:51.011 Compiler for C supports arguments -Wcast-qual: YES
00:06:51.011 Compiler for C supports arguments -Wdeprecated: YES
00:06:51.011 Compiler for C supports arguments -Wformat: YES
00:06:51.011 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:51.011 Compiler for C supports arguments -Wformat-security: NO
00:06:51.011 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:51.011 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:51.011 Compiler for C supports arguments -Wnested-externs: YES
00:06:51.011 Compiler for C supports arguments -Wold-style-definition: YES
00:06:51.011 Compiler for C supports arguments -Wpointer-arith: YES
00:06:51.011 Compiler for C supports arguments -Wsign-compare: YES
00:06:51.011 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:51.011 Compiler for C supports arguments -Wundef: YES
00:06:51.011 Compiler for C supports arguments -Wwrite-strings: YES
00:06:51.011 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:51.011 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:51.011 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:51.011 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:51.011 Program objdump found: YES (/usr/bin/objdump)
00:06:51.011 Compiler for C supports arguments -mavx512f: YES
00:06:51.011 Checking if "AVX512 checking" compiles: YES
00:06:51.011 Fetching value of define "__SSE4_2__" : 1
00:06:51.011 Fetching value of define "__AES__" : 1
00:06:51.011 Fetching value of define "__AVX__" : 1
00:06:51.011 Fetching value of define "__AVX2__" : 1
00:06:51.011 Fetching value of define "__AVX512BW__" : (undefined)
00:06:51.011 Fetching value of define "__AVX512CD__" : (undefined)
00:06:51.011 Fetching value of define "__AVX512DQ__" : (undefined)
00:06:51.011 Fetching value of define "__AVX512F__" : (undefined)
00:06:51.011 Fetching value of define "__AVX512VL__" : (undefined)
00:06:51.011 Fetching value of define "__PCLMUL__" : 1
00:06:51.011 Fetching value of define "__RDRND__" : 1
00:06:51.011 Fetching value of define "__RDSEED__" : 1
00:06:51.011 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:06:51.011 Fetching value of define "__znver1__" : (undefined)
00:06:51.011 Fetching value of define "__znver2__" : (undefined)
00:06:51.011 Fetching value of define "__znver3__" : (undefined)
00:06:51.011 Fetching value of define "__znver4__" : (undefined)
00:06:51.011 Library asan found: YES
00:06:51.011 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:51.011 Message: lib/log: Defining dependency "log"
00:06:51.011 Message: lib/kvargs: Defining dependency "kvargs"
00:06:51.011 Message: lib/telemetry: Defining dependency "telemetry"
00:06:51.011 Library rt found: YES
00:06:51.011 Checking for function "getentropy" : NO
00:06:51.011 Message: lib/eal: Defining dependency "eal"
00:06:51.011 Message: lib/ring: Defining dependency "ring"
00:06:51.011 Message: lib/rcu: Defining dependency "rcu"
00:06:51.011 Message: lib/mempool: Defining dependency "mempool"
00:06:51.011 Message: lib/mbuf: Defining dependency "mbuf"
00:06:51.011 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:51.011 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:06:51.011 Compiler for C supports arguments -mpclmul: YES
00:06:51.011 Compiler for C supports arguments -maes: YES
00:06:51.011 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:51.011 Compiler for C supports arguments -mavx512bw: YES
00:06:51.011 Compiler for C supports arguments -mavx512dq: YES
00:06:51.011 Compiler for C supports arguments -mavx512vl: YES
00:06:51.011 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:51.011 Compiler for C supports arguments -mavx2: YES
00:06:51.011 Compiler for C supports arguments -mavx: YES
00:06:51.011 Message: lib/net: Defining dependency "net"
00:06:51.011 Message: lib/meter: Defining dependency "meter"
00:06:51.011 Message: lib/ethdev: Defining dependency "ethdev"
00:06:51.011 Message: lib/pci: Defining dependency "pci"
00:06:51.011 Message: lib/cmdline: Defining dependency "cmdline"
00:06:51.011 Message: lib/hash: Defining dependency "hash"
00:06:51.011 Message: lib/timer: Defining dependency "timer"
00:06:51.011 Message: lib/compressdev: Defining dependency "compressdev"
00:06:51.011 Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:51.011 Message: lib/dmadev: Defining dependency "dmadev"
00:06:51.011 Compiler for C supports arguments -Wno-cast-qual: YES
00:06:51.011 Message: lib/power: Defining dependency "power"
00:06:51.011 Message: lib/reorder: Defining dependency "reorder"
00:06:51.011 Message: lib/security: Defining dependency "security"
00:06:51.011 Has header "linux/userfaultfd.h" : YES
00:06:51.011 Has header "linux/vduse.h" : YES
00:06:51.011 Message: lib/vhost: Defining dependency "vhost"
00:06:51.011 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:51.011 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:51.011 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:51.011 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:51.011 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:51.011 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:51.011 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:51.011 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:51.011 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:51.011 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:51.011 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:51.011 Configuring doxy-api-html.conf using configuration
00:06:51.011 Configuring doxy-api-man.conf using configuration
00:06:51.012 Program mandb found: YES (/usr/bin/mandb)
00:06:51.012 Program sphinx-build found: NO
00:06:51.012 Configuring rte_build_config.h using configuration
00:06:51.012 Message:
00:06:51.012 =================
00:06:51.012 Applications Enabled
00:06:51.012 =================
00:06:51.012
00:06:51.012 apps:
00:06:51.012
00:06:51.012
00:06:51.012 Message:
00:06:51.012 =================
00:06:51.012 Libraries Enabled
00:06:51.012 =================
00:06:51.012
00:06:51.012 libs:
00:06:51.012 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:06:51.012 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:06:51.012 cryptodev, dmadev, power, reorder, security, vhost,
00:06:51.012
00:06:51.012 Message:
00:06:51.012 ===============
00:06:51.012 Drivers Enabled
00:06:51.012 ===============
00:06:51.012
00:06:51.012 common:
00:06:51.012
00:06:51.012 bus:
00:06:51.012 pci, vdev,
00:06:51.012 mempool:
00:06:51.012 ring,
00:06:51.012 dma:
00:06:51.012
00:06:51.012 net:
00:06:51.012
00:06:51.012 crypto:
00:06:51.012
00:06:51.012 compress:
00:06:51.012
00:06:51.012 vdpa:
00:06:51.012
00:06:51.012
00:06:51.012 Message:
00:06:51.012 =================
00:06:51.012 Content Skipped
00:06:51.012 =================
00:06:51.012
00:06:51.012 apps:
00:06:51.012 dumpcap: explicitly disabled via build config
00:06:51.012 graph: explicitly disabled via build config
00:06:51.012 pdump: explicitly disabled via build config
00:06:51.012 proc-info: explicitly disabled via build config
00:06:51.012 test-acl: explicitly disabled via build config
00:06:51.012 test-bbdev: explicitly disabled via build config
00:06:51.012 test-cmdline: explicitly disabled via build config
00:06:51.012 test-compress-perf: explicitly disabled via build config
00:06:51.012 test-crypto-perf: explicitly disabled via build config
00:06:51.012 test-dma-perf: explicitly disabled via build config
00:06:51.012 test-eventdev: explicitly disabled via build config
00:06:51.012 test-fib: explicitly disabled via build config
00:06:51.012 test-flow-perf: explicitly disabled via build config
00:06:51.012 test-gpudev: explicitly disabled via build config
00:06:51.012 test-mldev: explicitly disabled via build config
00:06:51.012 test-pipeline: explicitly disabled via build config
00:06:51.012 test-pmd: explicitly disabled via build config
00:06:51.012 test-regex: explicitly disabled via build config
00:06:51.012 test-sad: explicitly disabled via build config
00:06:51.012 test-security-perf: explicitly disabled via build config
00:06:51.012
00:06:51.012 libs:
00:06:51.012 argparse: explicitly disabled via build config
00:06:51.012 metrics: explicitly disabled via build config
00:06:51.012 acl: explicitly disabled via build config
00:06:51.012 bbdev: explicitly disabled via build config
00:06:51.012 bitratestats: explicitly disabled via build config
00:06:51.012 bpf: explicitly disabled via build config
00:06:51.012 cfgfile: explicitly disabled via build config
00:06:51.012 distributor: explicitly disabled via build config
00:06:51.012 efd: explicitly disabled via build config
00:06:51.012 eventdev: explicitly disabled via build config
00:06:51.012 dispatcher: explicitly disabled via build config
00:06:51.012 gpudev: explicitly disabled via build config
00:06:51.012 gro: explicitly disabled via build config
00:06:51.012 gso: explicitly disabled via build config
00:06:51.012 ip_frag: explicitly disabled via build config
00:06:51.012 jobstats: explicitly disabled via build config
00:06:51.012 latencystats: explicitly disabled via build config
00:06:51.012 lpm: explicitly disabled via build config
00:06:51.012 member: explicitly disabled via build config
00:06:51.012 pcapng: explicitly disabled via build config
00:06:51.012 rawdev: explicitly disabled via build config
00:06:51.012 regexdev: explicitly disabled via build config
00:06:51.012 mldev: explicitly disabled via build config
00:06:51.012 rib: explicitly disabled via build config
00:06:51.012 sched: explicitly disabled via build config
00:06:51.012 stack: explicitly disabled via build config
00:06:51.012 ipsec: explicitly disabled via build config
00:06:51.012 pdcp: explicitly disabled via build config
00:06:51.012 fib: explicitly disabled via build config
00:06:51.012 port: explicitly disabled via build config
00:06:51.012 pdump: explicitly disabled via build config
00:06:51.012 table: explicitly disabled via build config
00:06:51.012 pipeline: explicitly disabled via build config
00:06:51.012 graph: explicitly disabled via build config
00:06:51.012 node: explicitly disabled via build config
00:06:51.012
00:06:51.012 drivers:
00:06:51.012 common/cpt: not in enabled drivers build config
00:06:51.012 common/dpaax: not in enabled drivers build config
00:06:51.012 common/iavf: not in enabled drivers build config
00:06:51.012 common/idpf: not in enabled drivers build config
00:06:51.012 common/ionic: not in enabled drivers build config
00:06:51.012 common/mvep: not in enabled drivers build config
00:06:51.012 common/octeontx: not in enabled drivers build config
00:06:51.012 bus/auxiliary: not in enabled drivers build config
00:06:51.012 bus/cdx: not in enabled drivers build config
00:06:51.012 bus/dpaa: not in enabled drivers build config
00:06:51.012 bus/fslmc: not in enabled drivers build config
00:06:51.012 bus/ifpga: not in enabled drivers build config
00:06:51.012 bus/platform: not in enabled drivers build config
00:06:51.012 bus/uacce: not in enabled drivers build config
00:06:51.012 bus/vmbus: not in enabled drivers build config
00:06:51.012 common/cnxk: not in enabled drivers build config
00:06:51.012 common/mlx5: not in enabled drivers build config
00:06:51.012 common/nfp: not in enabled drivers build config
00:06:51.012 common/nitrox: not in enabled drivers build config
00:06:51.012 common/qat: not in enabled drivers build config
00:06:51.012 common/sfc_efx: not in enabled drivers build config
00:06:51.012 mempool/bucket: not in enabled drivers build config
00:06:51.012 mempool/cnxk: not in enabled drivers build config
00:06:51.012 mempool/dpaa: not in enabled drivers build config
00:06:51.012 mempool/dpaa2: not in enabled drivers build config
00:06:51.012 mempool/octeontx: not in enabled drivers build config
00:06:51.012 mempool/stack: not in enabled drivers build config
00:06:51.012 dma/cnxk: not in enabled drivers build config
00:06:51.012 dma/dpaa: not in enabled drivers build config
00:06:51.012 dma/dpaa2: not in enabled drivers build config
00:06:51.012 dma/hisilicon: not in enabled drivers build config
00:06:51.012 dma/idxd: not in enabled drivers build config
00:06:51.012 dma/ioat: not in enabled drivers build config
00:06:51.012 dma/skeleton: not in enabled drivers build config
00:06:51.012 net/af_packet: not in enabled drivers build config
00:06:51.012 net/af_xdp: not in enabled drivers build config
00:06:51.012 net/ark: not in enabled drivers build config
00:06:51.012 net/atlantic: not in enabled drivers build config
00:06:51.012 net/avp: not in enabled drivers build config
00:06:51.012 net/axgbe: not in enabled drivers build config
00:06:51.012 net/bnx2x: not in enabled drivers build config
00:06:51.012 net/bnxt: not in enabled drivers build config
00:06:51.012 net/bonding: not in enabled drivers build config
00:06:51.012 net/cnxk: not in enabled drivers build config
00:06:51.012 net/cpfl: not in enabled drivers build config
00:06:51.012 net/cxgbe: not in enabled drivers build config
00:06:51.012 net/dpaa: not in enabled drivers build config
00:06:51.012 net/dpaa2: not in enabled drivers build config
00:06:51.012 net/e1000: not in enabled drivers build config
00:06:51.012 net/ena: not in enabled drivers build config
00:06:51.012 net/enetc: not in enabled drivers build config
00:06:51.012 net/enetfec: not in enabled drivers build config
00:06:51.012 net/enic: not in enabled drivers build config
00:06:51.012 net/failsafe: not in enabled drivers build config
00:06:51.012 net/fm10k: not in enabled drivers build config
00:06:51.012 net/gve: not in enabled drivers build config
00:06:51.012 net/hinic: not in enabled drivers build config
00:06:51.012 net/hns3: not in enabled drivers build config
00:06:51.012 net/i40e: not in enabled drivers build config
00:06:51.012 net/iavf: not in enabled drivers build config
00:06:51.012 net/ice: not in enabled drivers build config
00:06:51.012 net/idpf: not in enabled drivers build config
00:06:51.012 net/igc: not in enabled drivers build config
00:06:51.012 net/ionic: not in enabled drivers build config
00:06:51.012 net/ipn3ke: not in enabled drivers build config
00:06:51.012 net/ixgbe: not in enabled drivers build config
00:06:51.012 net/mana: not in enabled drivers build config
00:06:51.012 net/memif: not in enabled drivers build config
00:06:51.012 net/mlx4: not in enabled drivers build config
00:06:51.012 net/mlx5: not in enabled drivers build config
00:06:51.012 net/mvneta: not in enabled drivers build config
00:06:51.012 net/mvpp2: not in enabled drivers build config
00:06:51.012 net/netvsc: not in enabled drivers build config
00:06:51.012 net/nfb: not in enabled drivers build config
00:06:51.012 net/nfp: not in enabled drivers build config
00:06:51.012 net/ngbe: not in enabled drivers build config
00:06:51.012 net/null: not in enabled drivers build config
00:06:51.012 net/octeontx: not in enabled drivers build config
00:06:51.012 net/octeon_ep: not in enabled drivers build config
00:06:51.012 net/pcap: not in enabled drivers build config
00:06:51.012 net/pfe: not in enabled drivers build config
00:06:51.012 net/qede: not in enabled drivers build config
00:06:51.012 net/ring: not in enabled drivers build config
00:06:51.012 net/sfc: not in enabled drivers build config
00:06:51.012 net/softnic: not in enabled drivers build config
00:06:51.012 net/tap: not in enabled drivers build config
00:06:51.012 net/thunderx: not in enabled drivers build config
00:06:51.012 net/txgbe: not in enabled drivers build config
00:06:51.012 net/vdev_netvsc: not in enabled drivers build config
00:06:51.012 net/vhost: not in enabled drivers build config
00:06:51.012 net/virtio: not in enabled drivers build config
00:06:51.012 net/vmxnet3: not in enabled drivers build config
00:06:51.012 raw/*: missing internal dependency, "rawdev"
00:06:51.012 crypto/armv8: not in enabled drivers build config
00:06:51.012 crypto/bcmfs: not in enabled drivers build config
00:06:51.012 crypto/caam_jr: not in enabled drivers build config
00:06:51.012 crypto/ccp: not in enabled drivers build config
00:06:51.012 crypto/cnxk: not in enabled drivers build config
00:06:51.013 crypto/dpaa_sec: not in enabled drivers build config
00:06:51.013 crypto/dpaa2_sec: not in enabled drivers build config
00:06:51.013 crypto/ipsec_mb: not in enabled drivers build config
00:06:51.013 crypto/mlx5: not in enabled drivers build config
00:06:51.013 crypto/mvsam: not in enabled drivers build config
00:06:51.013 crypto/nitrox: not in enabled drivers build config
00:06:51.013 crypto/null: not in enabled drivers build config
00:06:51.013 crypto/octeontx: not in enabled drivers build config
00:06:51.013 crypto/openssl: not in enabled drivers build config
00:06:51.013 crypto/scheduler: not in enabled drivers build config
00:06:51.013 crypto/uadk: not in enabled drivers build config
00:06:51.013 crypto/virtio: not in enabled drivers build config
00:06:51.013 compress/isal: not in enabled drivers build config
00:06:51.013 compress/mlx5: not in enabled drivers build config
00:06:51.013 compress/nitrox: not in enabled drivers build config
00:06:51.013 compress/octeontx: not in enabled drivers build config
00:06:51.013 compress/zlib: not in enabled drivers build config
00:06:51.013 regex/*: missing internal dependency, "regexdev"
00:06:51.013 ml/*: missing internal dependency, "mldev"
00:06:51.013 vdpa/ifc: not in enabled drivers build config
00:06:51.013 vdpa/mlx5: not in enabled drivers build config
00:06:51.013 vdpa/nfp: not in enabled drivers build config
00:06:51.013 vdpa/sfc: not in enabled drivers build config
00:06:51.013 event/*: missing internal dependency, "eventdev"
00:06:51.013 baseband/*: missing internal dependency, "bbdev"
00:06:51.013 gpu/*: missing internal dependency, "gpudev"
00:06:51.013
00:06:51.013
00:06:51.013 Build targets in project: 85
00:06:51.013
00:06:51.013 DPDK 24.03.0
00:06:51.013
00:06:51.013 User defined options
00:06:51.013 buildtype : debug
00:06:51.013 default_library : shared
00:06:51.013 libdir : lib
00:06:51.013 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:06:51.013 b_sanitize : address
00:06:51.013 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:06:51.013 c_link_args :
00:06:51.013 cpu_instruction_set: native
00:06:51.013 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:06:51.013 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:06:51.013 enable_docs : false
00:06:51.013 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:06:51.013 enable_kmods : false
00:06:51.013 max_lcores : 128
00:06:51.013 tests : false
00:06:51.013
00:06:51.013 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:51.013 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:06:51.013 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:06:51.013 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:06:51.013 [3/268] Linking static target lib/librte_kvargs.a
00:06:51.013 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:06:51.013 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:06:51.013 [6/268] Linking static target lib/librte_log.a
00:06:51.271 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:06:51.533 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:06:51.533 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:06:51.533 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:06:51.533 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:06:51.791 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:06:51.791 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:06:51.791 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:06:51.791 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:06:51.791 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:06:51.791 [17/268] Linking static target lib/librte_telemetry.a
00:06:52.050 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:06:52.050 [19/268] Linking target lib/librte_log.so.24.1
00:06:52.050 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:06:52.308 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:06:52.567 [22/268] Linking target lib/librte_kvargs.so.24.1
00:06:52.826 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:06:52.826 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:06:52.826 [25/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:06:52.826 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:06:52.826 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:06:52.826 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:06:53.084 [29/268] Linking target lib/librte_telemetry.so.24.1
00:06:53.084 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:06:53.084 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:06:53.084 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:06:53.084 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:06:53.342 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:06:53.601 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:06:53.601 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:06:53.860 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:06:54.118 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:06:54.118 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:06:54.118 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:06:54.118 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:06:54.118 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:06:54.118 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:06:54.118 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:06:54.376 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:06:54.377 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:06:54.635 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:06:54.893 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:06:54.893 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:06:55.150 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:55.150 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:55.409 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:55.409 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:55.409 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:06:55.409 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:06:55.669 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:55.669 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:06:55.928 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:06:55.928 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:55.928 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:55.928 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:06:56.187 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:06:56.187 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:56.498 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:06:56.498 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:06:56.498 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:56.757 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:56.757 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:06:57.016 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:57.016 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:06:57.016 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:57.016 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:06:57.016 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:06:57.016 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:57.275 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:57.275 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:06:57.275 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:06:57.275 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:57.539 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:06:57.539 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:06:57.813 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:06:57.813 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:57.813 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:06:57.813 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:58.071 [85/268] Linking static target lib/librte_eal.a
00:06:58.071 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:06:58.071 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:06:58.330 [88/268] Linking static target lib/librte_ring.a
00:06:58.330 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:06:58.330 [90/268] Linking static target lib/librte_rcu.a
00:06:58.330 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:06:58.330 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:06:58.330 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:58.588 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:06:58.845 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:06:58.845 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:06:58.845 [97/268] Linking static target lib/librte_mempool.a
00:06:58.845 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:06:59.103 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:06:59.103 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:06:59.103 [101/268] Linking static target lib/librte_mbuf.a
00:06:59.668 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:06:59.668 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:06:59.668 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:06:59.668 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:06:59.668 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:06:59.668 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:06:59.927 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:06:59.927 [109/268] Linking static target lib/librte_net.a
00:06:59.927 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:06:59.927 [111/268] Linking static target lib/librte_meter.a
00:07:00.185 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:07:00.185 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:07:00.443 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:07:00.443 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:07:00.443 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:07:00.443 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:07:00.443 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:07:00.443 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:07:01.009 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:07:01.268 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:07:01.268 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:07:01.527 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:07:01.527 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:07:01.527 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:07:01.527 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:07:01.786 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:07:01.786 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:07:01.786 [129/268] Linking static target lib/librte_pci.a
00:07:01.786 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:07:01.786 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:07:01.786 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:07:01.786 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:07:02.044 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:07:02.044 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:07:02.044 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:07:02.044 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:07:02.044 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:07:02.302 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:07:02.302 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:07:02.303 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:07:02.303 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:07:02.303 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:07:02.303 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:07:02.561 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:07:02.561 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:07:02.561 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:07:02.820 [148/268] Linking static target lib/librte_cmdline.a
00:07:02.820 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:07:03.078 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:07:03.078 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:07:03.337 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:07:03.337 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:07:03.337 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:07:03.337 [155/268] Linking static target lib/librte_timer.a
00:07:03.337 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:07:03.337 [157/268] Linking static target lib/librte_ethdev.a
00:07:03.904 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:07:03.904 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:07:03.904 [160/268] Linking static target lib/librte_compressdev.a
00:07:03.904 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:07:03.904 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:07:03.904 [163/268] Linking static target lib/librte_hash.a
00:07:03.904 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:07:04.162 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:07:04.162 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:07:04.421 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:07:04.421 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:07:04.421 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:07:04.421 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:07:04.421 [171/268] Linking static target lib/librte_dmadev.a
00:07:04.681 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:07:04.681 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:07:04.940 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:04.940 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:07:05.197 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:07:05.197 [177/268] Linking static target lib/librte_cryptodev.a
00:07:05.197 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:07:05.197 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:07:05.455 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:05.455 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:07:05.455 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:07:05.455 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:07:05.455 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:07:05.714 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:07:05.714 [186/268] Linking static target lib/librte_power.a
00:07:06.280 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:07:06.280 [188/268] Linking static target lib/librte_reorder.a
00:07:06.280 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:07:06.280 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:07:06.538 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:07:06.538 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:07:06.538 [193/268] Linking static target lib/librte_security.a
00:07:06.797 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:07:07.055 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:07:07.055 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:07:07.622 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:07:07.622 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:07:07.622 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:07:07.880 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:07:07.880 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:07:07.880 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:07:08.138 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:08.396 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:07:08.396 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:07:08.656 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:07:08.656 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:07:08.656 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:07:08.656 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:07:08.656 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:07:08.656 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:07:08.915 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:07:08.915 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:07:08.915 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:07:08.915 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:07:08.915 [216/268] Linking static target drivers/librte_bus_vdev.a
00:07:08.915 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:07:08.915 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:07:08.915 [219/268] Linking static target drivers/librte_bus_pci.a
00:07:09.174 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:07:09.174 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:07:09.433 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:09.433 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:07:09.433 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:07:09.433 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:07:09.433 [226/268] Linking static target drivers/librte_mempool_ring.a
00:07:09.692 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:07:10.259 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:07:10.259 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:07:10.259 [230/268] Linking target lib/librte_eal.so.24.1
00:07:10.518 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:07:10.518 [232/268] Linking target lib/librte_ring.so.24.1
00:07:10.518 [233/268] Linking target lib/librte_pci.so.24.1
00:07:10.518 [234/268] Linking target lib/librte_dmadev.so.24.1
00:07:10.518 [235/268] Linking target lib/librte_timer.so.24.1
00:07:10.518 [236/268] Linking target lib/librte_meter.so.24.1
00:07:10.518 [237/268] Linking target drivers/librte_bus_vdev.so.24.1
00:07:10.777 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:07:10.777 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:07:10.777 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:07:10.777 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:07:10.777 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:07:10.777 [243/268] Linking target lib/librte_mempool.so.24.1
00:07:10.777 [244/268] Linking target lib/librte_rcu.so.24.1
00:07:10.777 [245/268] Linking target drivers/librte_bus_pci.so.24.1
00:07:10.777 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:07:10.777 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:07:11.037 [248/268] Linking target drivers/librte_mempool_ring.so.24.1
00:07:11.037 [249/268] Linking target lib/librte_mbuf.so.24.1
00:07:11.037 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:07:11.037 [251/268] Linking target lib/librte_reorder.so.24.1
00:07:11.037 [252/268] Linking target lib/librte_compressdev.so.24.1
00:07:11.037 [253/268] Linking target lib/librte_cryptodev.so.24.1
00:07:11.037 [254/268] Linking target lib/librte_net.so.24.1
00:07:11.296 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:07:11.296 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:07:11.296 [257/268] Linking target lib/librte_security.so.24.1
00:07:11.296 [258/268] Linking target lib/librte_cmdline.so.24.1
00:07:11.296 [259/268] Linking target lib/librte_hash.so.24.1
00:07:11.560 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:07:11.560 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:11.830 [262/268] Linking target lib/librte_ethdev.so.24.1
00:07:11.830 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:07:12.089 [264/268] Linking target lib/librte_power.so.24.1
00:07:15.376 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:07:15.376 [266/268] Linking static target lib/librte_vhost.a
00:07:16.750 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:07:16.750 [268/268] Linking target lib/librte_vhost.so.24.1
00:07:16.750 INFO: autodetecting backend as ninja
00:07:16.750 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:07:38.684 CC lib/log/log.o
00:07:38.684 CC lib/log/log_flags.o
00:07:38.684 CC lib/log/log_deprecated.o
00:07:38.684 CC lib/ut_mock/mock.o
00:07:38.684 CC lib/ut/ut.o
00:07:38.684 LIB libspdk_ut.a
00:07:38.684 LIB libspdk_log.a
00:07:38.684 LIB libspdk_ut_mock.a
00:07:38.684 SO libspdk_ut.so.2.0
00:07:38.684 SO libspdk_ut_mock.so.6.0
00:07:38.684 SO libspdk_log.so.7.1
00:07:38.684 SYMLINK libspdk_ut.so
00:07:38.684 SYMLINK libspdk_ut_mock.so
00:07:38.684 SYMLINK libspdk_log.so
00:07:38.684 CC lib/ioat/ioat.o
00:07:38.684 CC lib/util/base64.o
00:07:38.684 CC lib/util/bit_array.o
00:07:38.684 CC lib/util/crc16.o
00:07:38.684 CC lib/util/cpuset.o
00:07:38.684 CC lib/util/crc32.o
00:07:38.684 CC lib/util/crc32c.o
00:07:38.684 CXX lib/trace_parser/trace.o
00:07:38.684 CC lib/dma/dma.o
00:07:38.684 CC lib/vfio_user/host/vfio_user_pci.o
00:07:38.684 CC lib/util/crc32_ieee.o
00:07:38.684 CC lib/util/crc64.o
00:07:38.684 CC lib/vfio_user/host/vfio_user.o
00:07:38.684 CC lib/util/dif.o
00:07:38.684 LIB libspdk_dma.a
00:07:38.684 SO libspdk_dma.so.5.0
00:07:38.684 CC lib/util/fd.o
00:07:38.684 CC lib/util/fd_group.o
00:07:38.684 SYMLINK libspdk_dma.so
00:07:38.684 CC lib/util/file.o
00:07:38.684 CC lib/util/hexlify.o
00:07:38.684 CC lib/util/iov.o
00:07:38.684 CC lib/util/math.o
00:07:38.684 LIB libspdk_ioat.a
00:07:38.684 LIB libspdk_vfio_user.a
00:07:38.684 SO libspdk_ioat.so.7.0
00:07:38.684 CC lib/util/net.o
00:07:38.684 SO libspdk_vfio_user.so.5.0
00:07:38.684 CC lib/util/pipe.o
00:07:38.684 SYMLINK libspdk_ioat.so
00:07:38.684 CC lib/util/strerror_tls.o
00:07:38.684 CC lib/util/string.o
00:07:38.684 CC lib/util/uuid.o
00:07:38.684 SYMLINK libspdk_vfio_user.so
00:07:38.684 CC lib/util/xor.o
00:07:38.684 CC lib/util/zipf.o
00:07:38.684 CC lib/util/md5.o
00:07:38.684 LIB libspdk_util.a
00:07:38.684 SO libspdk_util.so.10.1
00:07:38.684 LIB libspdk_trace_parser.a
00:07:38.943 SO libspdk_trace_parser.so.6.0
00:07:38.943 SYMLINK libspdk_util.so
00:07:38.943 SYMLINK libspdk_trace_parser.so
00:07:38.943 CC lib/conf/conf.o
00:07:38.943 CC lib/idxd/idxd_user.o
00:07:38.943 CC lib/idxd/idxd.o
00:07:38.943 CC lib/rdma_utils/rdma_utils.o
00:07:38.943 CC lib/idxd/idxd_kernel.o
00:07:38.943 CC lib/env_dpdk/env.o
00:07:38.943 CC lib/env_dpdk/memory.o
00:07:38.943 CC lib/json/json_parse.o
00:07:38.943 CC lib/json/json_util.o
00:07:38.943 CC lib/vmd/vmd.o
00:07:39.202 CC lib/vmd/led.o
00:07:39.202 LIB libspdk_conf.a
00:07:39.202 CC lib/json/json_write.o
00:07:39.202 CC lib/env_dpdk/pci.o
00:07:39.202 SO libspdk_conf.so.6.0
00:07:39.460 LIB libspdk_rdma_utils.a
00:07:39.460 CC lib/env_dpdk/init.o
00:07:39.460 SO libspdk_rdma_utils.so.1.0
00:07:39.460 SYMLINK libspdk_conf.so
00:07:39.460 CC lib/env_dpdk/threads.o
00:07:39.460 CC lib/env_dpdk/pci_ioat.o
00:07:39.460 SYMLINK libspdk_rdma_utils.so
00:07:39.460 CC lib/env_dpdk/pci_virtio.o
00:07:39.460 CC lib/env_dpdk/pci_vmd.o
00:07:39.460 CC lib/env_dpdk/pci_idxd.o
00:07:39.719 LIB libspdk_json.a
00:07:39.719 CC lib/env_dpdk/pci_event.o
00:07:39.719 SO libspdk_json.so.6.0
00:07:39.719 CC lib/env_dpdk/sigbus_handler.o
00:07:39.719 CC lib/rdma_provider/common.o
00:07:39.719 SYMLINK libspdk_json.so
00:07:39.719 CC lib/env_dpdk/pci_dpdk.o
00:07:39.719 CC lib/rdma_provider/rdma_provider_verbs.o
00:07:39.719 CC lib/env_dpdk/pci_dpdk_2207.o
00:07:39.977 LIB libspdk_idxd.a
00:07:39.977 SO libspdk_idxd.so.12.1
00:07:39.977 CC lib/env_dpdk/pci_dpdk_2211.o
00:07:39.977 LIB libspdk_vmd.a
00:07:39.977 SO libspdk_vmd.so.6.0
00:07:39.977 SYMLINK libspdk_idxd.so
00:07:39.977 SYMLINK libspdk_vmd.so
00:07:39.977 LIB libspdk_rdma_provider.a
00:07:39.977 SO libspdk_rdma_provider.so.7.0
00:07:39.977 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:07:39.977 CC lib/jsonrpc/jsonrpc_server.o
00:07:39.977 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:07:39.977 CC lib/jsonrpc/jsonrpc_client.o
00:07:40.236 SYMLINK libspdk_rdma_provider.so
00:07:40.495 LIB libspdk_jsonrpc.a
00:07:40.495 SO libspdk_jsonrpc.so.6.0
00:07:40.495 SYMLINK libspdk_jsonrpc.so
00:07:40.754 CC lib/rpc/rpc.o
00:07:40.754 LIB libspdk_env_dpdk.a
00:07:41.013 SO libspdk_env_dpdk.so.15.1
00:07:41.013 LIB libspdk_rpc.a
00:07:41.013 SO libspdk_rpc.so.6.0
00:07:41.013 SYMLINK libspdk_env_dpdk.so
00:07:41.272 SYMLINK libspdk_rpc.so
00:07:41.531 CC lib/trace/trace.o
00:07:41.531 CC lib/trace/trace_rpc.o
00:07:41.531 CC lib/trace/trace_flags.o
00:07:41.531 CC lib/keyring/keyring.o
00:07:41.531 CC lib/keyring/keyring_rpc.o
00:07:41.531 CC lib/notify/notify.o
00:07:41.531 CC lib/notify/notify_rpc.o
00:07:41.531 LIB libspdk_notify.a
00:07:41.790 SO libspdk_notify.so.6.0
00:07:41.790 LIB libspdk_keyring.a
00:07:41.790 SYMLINK libspdk_notify.so
00:07:41.790 LIB libspdk_trace.a
00:07:41.790 SO libspdk_keyring.so.2.0
00:07:41.790 SO libspdk_trace.so.11.0
00:07:41.790 SYMLINK libspdk_keyring.so
00:07:41.790 SYMLINK libspdk_trace.so
00:07:42.048 CC lib/sock/sock.o
00:07:42.048 CC lib/sock/sock_rpc.o
00:07:42.048 CC lib/thread/thread.o
00:07:42.048 CC lib/thread/iobuf.o
00:07:42.652 LIB libspdk_sock.a
00:07:42.913 SO libspdk_sock.so.10.0
00:07:42.913 SYMLINK libspdk_sock.so
00:07:43.189 CC lib/nvme/nvme_ctrlr_cmd.o
00:07:43.189 CC lib/nvme/nvme_ctrlr.o
00:07:43.189 CC lib/nvme/nvme_fabric.o
00:07:43.189 CC lib/nvme/nvme_ns_cmd.o
00:07:43.189 CC lib/nvme/nvme_ns.o
00:07:43.189 CC lib/nvme/nvme_pcie_common.o
00:07:43.189 CC lib/nvme/nvme_pcie.o
00:07:43.189 CC lib/nvme/nvme_qpair.o
00:07:43.189 CC lib/nvme/nvme.o
00:07:44.139 CC lib/nvme/nvme_quirks.o
00:07:44.139 CC lib/nvme/nvme_transport.o
00:07:44.139 LIB libspdk_thread.a
00:07:44.139 CC lib/nvme/nvme_discovery.o
00:07:44.139 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:07:44.139 SO libspdk_thread.so.11.0
00:07:44.139 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:07:44.398 CC lib/nvme/nvme_tcp.o
00:07:44.398 SYMLINK libspdk_thread.so
00:07:44.398 CC lib/nvme/nvme_opal.o
00:07:44.398 CC lib/accel/accel.o
00:07:44.656 CC lib/accel/accel_rpc.o
00:07:44.656 CC lib/nvme/nvme_io_msg.o
00:07:44.914 CC lib/accel/accel_sw.o
00:07:44.914 CC lib/nvme/nvme_poll_group.o
00:07:44.914 CC lib/nvme/nvme_zns.o
00:07:44.914 CC lib/nvme/nvme_stubs.o
00:07:44.914 CC lib/nvme/nvme_auth.o
00:07:45.173 CC lib/nvme/nvme_cuse.o
00:07:45.173 CC lib/nvme/nvme_rdma.o
00:07:45.739 CC lib/blob/blobstore.o
00:07:45.739 CC lib/init/json_config.o
00:07:45.739 CC lib/virtio/virtio.o
00:07:45.739 CC lib/fsdev/fsdev.o
00:07:45.739 LIB libspdk_accel.a
00:07:45.998 SO libspdk_accel.so.16.0
00:07:45.998 CC lib/init/subsystem.o
00:07:45.998 CC lib/init/subsystem_rpc.o
00:07:45.998 SYMLINK libspdk_accel.so
00:07:45.998 CC lib/blob/request.o
00:07:46.257 CC lib/init/rpc.o
00:07:46.257 CC lib/fsdev/fsdev_io.o
00:07:46.257 CC lib/virtio/virtio_vhost_user.o
00:07:46.257 CC lib/blob/zeroes.o
00:07:46.257 CC lib/virtio/virtio_vfio_user.o
00:07:46.257 CC lib/bdev/bdev.o
00:07:46.257 LIB libspdk_init.a
00:07:46.516 CC lib/bdev/bdev_rpc.o
00:07:46.516 SO libspdk_init.so.6.0
00:07:46.516 CC lib/bdev/bdev_zone.o
00:07:46.516 CC lib/blob/blob_bs_dev.o
00:07:46.516 SYMLINK libspdk_init.so
00:07:46.516 CC lib/fsdev/fsdev_rpc.o
00:07:46.516 CC lib/bdev/part.o
00:07:46.516 CC lib/virtio/virtio_pci.o
00:07:46.775 LIB libspdk_fsdev.a
00:07:46.775 CC lib/bdev/scsi_nvme.o
00:07:46.775 SO libspdk_fsdev.so.2.0
00:07:46.775 SYMLINK libspdk_fsdev.so
00:07:46.775 CC lib/event/reactor.o
00:07:46.775 CC lib/event/app.o
00:07:46.775 CC lib/event/log_rpc.o
00:07:46.775 CC lib/event/app_rpc.o
00:07:46.775 CC lib/event/scheduler_static.o
00:07:47.034 LIB libspdk_virtio.a
00:07:47.034 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:07:47.034 SO libspdk_virtio.so.7.0
00:07:47.034 LIB libspdk_nvme.a
00:07:47.034 SYMLINK libspdk_virtio.so
00:07:47.293 SO libspdk_nvme.so.15.0
00:07:47.552 LIB libspdk_event.a
00:07:47.552 SO libspdk_event.so.14.0
00:07:47.552 SYMLINK libspdk_nvme.so
00:07:47.552 SYMLINK libspdk_event.so
00:07:47.811 LIB libspdk_fuse_dispatcher.a
00:07:47.811 SO libspdk_fuse_dispatcher.so.1.0
00:07:47.811 SYMLINK libspdk_fuse_dispatcher.so
00:07:49.714 LIB libspdk_blob.a
00:07:49.973 LIB libspdk_bdev.a
00:07:49.973 SO libspdk_blob.so.11.0
00:07:49.973 SO libspdk_bdev.so.17.0
00:07:49.973 SYMLINK libspdk_blob.so
00:07:50.231 SYMLINK libspdk_bdev.so
00:07:50.231 CC lib/lvol/lvol.o
00:07:50.231 CC lib/blobfs/blobfs.o
00:07:50.231 CC lib/blobfs/tree.o
00:07:50.231 CC lib/nvmf/ctrlr.o
00:07:50.231 CC lib/nvmf/ctrlr_discovery.o
00:07:50.231 CC lib/nvmf/ctrlr_bdev.o
00:07:50.231 CC lib/ftl/ftl_core.o
00:07:50.231 CC lib/nbd/nbd.o
00:07:50.231 CC lib/scsi/dev.o
00:07:50.231 CC lib/ublk/ublk.o
00:07:50.490 CC lib/ftl/ftl_init.o
00:07:50.490 CC lib/scsi/lun.o
00:07:50.758 CC lib/scsi/port.o
00:07:50.758 CC lib/ftl/ftl_layout.o
00:07:50.758 CC lib/nbd/nbd_rpc.o
00:07:51.040 CC lib/scsi/scsi.o
00:07:51.040 CC lib/scsi/scsi_bdev.o
00:07:51.040 CC lib/scsi/scsi_pr.o
00:07:51.040 LIB libspdk_nbd.a
00:07:51.040 SO libspdk_nbd.so.7.0
00:07:51.040 CC lib/ublk/ublk_rpc.o
00:07:51.040 SYMLINK libspdk_nbd.so
00:07:51.040 CC lib/ftl/ftl_debug.o
00:07:51.040 CC lib/scsi/scsi_rpc.o
00:07:51.299 CC lib/nvmf/subsystem.o
00:07:51.299 CC lib/ftl/ftl_io.o
00:07:51.299 LIB libspdk_ublk.a
00:07:51.299 CC lib/nvmf/nvmf.o
00:07:51.299 CC lib/scsi/task.o
00:07:51.299 SO libspdk_ublk.so.3.0
00:07:51.299 LIB libspdk_blobfs.a
00:07:51.299 CC lib/nvmf/nvmf_rpc.o
00:07:51.299 SYMLINK libspdk_ublk.so
00:07:51.299 CC lib/nvmf/transport.o
00:07:51.558 SO libspdk_blobfs.so.10.0
00:07:51.558 CC lib/ftl/ftl_sb.o
00:07:51.558 LIB libspdk_lvol.a
00:07:51.558 SYMLINK libspdk_blobfs.so
00:07:51.558 CC lib/ftl/ftl_l2p.o
00:07:51.558 SO libspdk_lvol.so.10.0
00:07:51.558 CC lib/nvmf/tcp.o
00:07:51.558 LIB libspdk_scsi.a
00:07:51.558 SYMLINK libspdk_lvol.so
00:07:51.558 CC lib/nvmf/stubs.o
00:07:51.558 SO libspdk_scsi.so.9.0
00:07:51.817 CC lib/nvmf/mdns_server.o
00:07:51.817 SYMLINK libspdk_scsi.so
00:07:51.817 CC lib/nvmf/rdma.o
00:07:51.817 CC lib/ftl/ftl_l2p_flat.o
00:07:52.076 CC lib/ftl/ftl_nv_cache.o
00:07:52.076 CC lib/nvmf/auth.o
00:07:52.335 CC lib/iscsi/conn.o
00:07:52.335 CC lib/iscsi/init_grp.o
00:07:52.335 CC lib/ftl/ftl_band.o
00:07:52.335 CC lib/vhost/vhost.o
00:07:52.594 CC lib/vhost/vhost_rpc.o
00:07:52.853 CC lib/vhost/vhost_scsi.o
00:07:52.853 CC lib/vhost/vhost_blk.o
00:07:52.853 CC lib/ftl/ftl_band_ops.o
00:07:53.111 CC lib/vhost/rte_vhost_user.o
00:07:53.369 CC lib/iscsi/iscsi.o
00:07:53.369 CC lib/iscsi/param.o
00:07:53.369 CC lib/ftl/ftl_writer.o
00:07:53.369 CC lib/iscsi/portal_grp.o
00:07:53.628 CC lib/iscsi/tgt_node.o
00:07:53.628 CC lib/ftl/ftl_rq.o
00:07:53.628 CC lib/iscsi/iscsi_subsystem.o
00:07:53.628 CC lib/ftl/ftl_reloc.o
00:07:53.886 CC lib/iscsi/iscsi_rpc.o
00:07:53.886 CC lib/iscsi/task.o
00:07:53.886 CC lib/ftl/ftl_l2p_cache.o
00:07:54.145 CC lib/ftl/ftl_p2l.o
00:07:54.145 CC lib/ftl/ftl_p2l_log.o
00:07:54.145 CC lib/ftl/mngt/ftl_mngt.o
00:07:54.145 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:07:54.145 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:07:54.145 CC lib/ftl/mngt/ftl_mngt_startup.o
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_md.o
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_misc.o
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:07:54.403 LIB libspdk_nvmf.a
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_band.o
00:07:54.403 LIB libspdk_vhost.a
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:07:54.403 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:07:54.662 SO libspdk_vhost.so.8.0
00:07:54.662 SO libspdk_nvmf.so.20.0
00:07:54.662 SYMLINK libspdk_vhost.so
00:07:54.662 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:07:54.662 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:07:54.662 CC lib/ftl/utils/ftl_conf.o
00:07:54.662 CC lib/ftl/utils/ftl_md.o
00:07:54.662 CC lib/ftl/utils/ftl_mempool.o
00:07:54.662 CC lib/ftl/utils/ftl_bitmap.o
00:07:54.662 CC lib/ftl/utils/ftl_property.o
00:07:54.920 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:07:54.920 SYMLINK libspdk_nvmf.so
00:07:54.920 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:07:54.920 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:07:54.920 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:07:54.920 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:07:54.920 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:07:54.920 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:07:54.920 LIB libspdk_iscsi.a
00:07:55.179 CC lib/ftl/upgrade/ftl_sb_v3.o
00:07:55.179 CC lib/ftl/upgrade/ftl_sb_v5.o
00:07:55.179 SO libspdk_iscsi.so.8.0
00:07:55.179 CC lib/ftl/nvc/ftl_nvc_dev.o
00:07:55.179 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:07:55.179 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:07:55.179 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:07:55.179 CC lib/ftl/base/ftl_base_dev.o
00:07:55.179 CC lib/ftl/base/ftl_base_bdev.o
00:07:55.438 CC lib/ftl/ftl_trace.o
00:07:55.438 SYMLINK libspdk_iscsi.so
00:07:55.696 LIB libspdk_ftl.a
00:07:55.955 SO libspdk_ftl.so.9.0
00:07:56.214 SYMLINK libspdk_ftl.so
00:07:56.472 CC module/env_dpdk/env_dpdk_rpc.o
00:07:56.731 CC module/accel/dsa/accel_dsa.o
00:07:56.731 CC module/accel/ioat/accel_ioat.o
00:07:56.731 CC module/sock/posix/posix.o
00:07:56.731 CC module/accel/error/accel_error.o
00:07:56.731 CC module/keyring/linux/keyring.o
00:07:56.731 CC module/fsdev/aio/fsdev_aio.o
00:07:56.731 CC module/keyring/file/keyring.o
00:07:56.731 CC module/scheduler/dynamic/scheduler_dynamic.o
00:07:56.731 CC module/blob/bdev/blob_bdev.o
00:07:56.731 LIB libspdk_env_dpdk_rpc.a
00:07:56.731 SO libspdk_env_dpdk_rpc.so.6.0
00:07:56.731 SYMLINK libspdk_env_dpdk_rpc.so
00:07:56.731 CC module/accel/ioat/accel_ioat_rpc.o
00:07:56.731 CC module/keyring/linux/keyring_rpc.o
00:07:56.731 CC module/keyring/file/keyring_rpc.o
00:07:56.991 CC module/accel/error/accel_error_rpc.o
00:07:56.991 LIB libspdk_scheduler_dynamic.a
00:07:56.991 SO libspdk_scheduler_dynamic.so.4.0
00:07:56.991 LIB libspdk_accel_ioat.a
00:07:56.991 CC module/accel/dsa/accel_dsa_rpc.o
00:07:56.991 LIB libspdk_keyring_linux.a
00:07:56.991 SO libspdk_accel_ioat.so.6.0
00:07:56.991 LIB libspdk_blob_bdev.a
00:07:56.991 LIB libspdk_keyring_file.a
00:07:56.991 SO libspdk_keyring_linux.so.1.0
00:07:56.991 SYMLINK libspdk_scheduler_dynamic.so
00:07:56.991 SO libspdk_blob_bdev.so.11.0
00:07:56.991 LIB libspdk_accel_error.a
00:07:56.991 SO libspdk_keyring_file.so.2.0
00:07:56.991 SYMLINK libspdk_keyring_linux.so
00:07:56.991 SYMLINK libspdk_accel_ioat.so
00:07:56.991 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:07:56.991 SO libspdk_accel_error.so.2.0
00:07:56.991 SYMLINK libspdk_blob_bdev.so
00:07:57.250 SYMLINK libspdk_keyring_file.so
00:07:57.250 CC module/fsdev/aio/fsdev_aio_rpc.o
00:07:57.250 CC module/fsdev/aio/linux_aio_mgr.o
00:07:57.250 LIB libspdk_accel_dsa.a
00:07:57.250 SO libspdk_accel_dsa.so.5.0
00:07:57.250 SYMLINK libspdk_accel_error.so
00:07:57.250 CC module/scheduler/gscheduler/gscheduler.o
00:07:57.250 LIB libspdk_scheduler_dpdk_governor.a
00:07:57.250 SYMLINK libspdk_accel_dsa.so
00:07:57.250 SO libspdk_scheduler_dpdk_governor.so.4.0
00:07:57.250 SYMLINK libspdk_scheduler_dpdk_governor.so
00:07:57.509 CC module/accel/iaa/accel_iaa.o
00:07:57.509 LIB libspdk_scheduler_gscheduler.a
00:07:57.509 SO libspdk_scheduler_gscheduler.so.4.0
00:07:57.509 CC module/blobfs/bdev/blobfs_bdev.o
00:07:57.509 CC module/bdev/gpt/gpt.o
00:07:57.509 CC module/bdev/delay/vbdev_delay.o
00:07:57.509 CC module/bdev/error/vbdev_error.o
00:07:57.509 SYMLINK libspdk_scheduler_gscheduler.so
00:07:57.509 CC module/bdev/delay/vbdev_delay_rpc.o
00:07:57.509 LIB libspdk_fsdev_aio.a
00:07:57.509 CC module/bdev/malloc/bdev_malloc.o
00:07:57.509 CC module/bdev/lvol/vbdev_lvol.o
00:07:57.509 SO libspdk_fsdev_aio.so.1.0
00:07:57.509 LIB libspdk_sock_posix.a
00:07:57.509 SO libspdk_sock_posix.so.6.0
00:07:57.767 SYMLINK libspdk_fsdev_aio.so
00:07:57.767 CC module/accel/iaa/accel_iaa_rpc.o
00:07:57.767 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:07:57.767 CC module/bdev/gpt/vbdev_gpt.o
00:07:57.767 CC module/bdev/malloc/bdev_malloc_rpc.o
00:07:57.767 SYMLINK libspdk_sock_posix.so
00:07:57.767 CC module/bdev/error/vbdev_error_rpc.o
00:07:57.767 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:07:57.767 LIB libspdk_accel_iaa.a
00:07:57.767 SO libspdk_accel_iaa.so.3.0
00:07:58.027 SYMLINK libspdk_accel_iaa.so
00:07:58.027 LIB libspdk_bdev_error.a
00:07:58.027 LIB libspdk_bdev_delay.a
00:07:58.027 LIB libspdk_blobfs_bdev.a
00:07:58.027 SO libspdk_bdev_error.so.6.0
00:07:58.027 CC module/bdev/null/bdev_null.o
00:07:58.027 SO libspdk_bdev_delay.so.6.0
00:07:58.027 SO libspdk_blobfs_bdev.so.6.0
00:07:58.027 LIB libspdk_bdev_gpt.a
00:07:58.027 LIB libspdk_bdev_malloc.a
00:07:58.027 SO libspdk_bdev_gpt.so.6.0
00:07:58.027 SYMLINK libspdk_bdev_delay.so
00:07:58.027 SYMLINK libspdk_bdev_error.so
00:07:58.027 SO libspdk_bdev_malloc.so.6.0
00:07:58.027 SYMLINK libspdk_blobfs_bdev.so
00:07:58.027 CC module/bdev/nvme/bdev_nvme.o
00:07:58.027 CC module/bdev/nvme/bdev_nvme_rpc.o
00:07:58.027 CC module/bdev/passthru/vbdev_passthru.o
00:07:58.027 SYMLINK libspdk_bdev_gpt.so
00:07:58.027 CC module/bdev/nvme/nvme_rpc.o
00:07:58.286 SYMLINK libspdk_bdev_malloc.so
00:07:58.286 CC module/bdev/null/bdev_null_rpc.o
00:07:58.286 LIB libspdk_bdev_lvol.a
00:07:58.286 CC module/bdev/raid/bdev_raid.o
00:07:58.286 SO libspdk_bdev_lvol.so.6.0
00:07:58.286 CC module/bdev/split/vbdev_split.o
00:07:58.286 CC module/bdev/zone_block/vbdev_zone_block.o
00:07:58.286 CC module/bdev/split/vbdev_split_rpc.o
00:07:58.286 SYMLINK libspdk_bdev_lvol.so
00:07:58.286 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:07:58.286 LIB libspdk_bdev_null.a
00:07:58.550 CC module/bdev/nvme/bdev_mdns_client.o
00:07:58.550 SO libspdk_bdev_null.so.6.0
00:07:58.550 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:07:58.550 CC module/bdev/raid/bdev_raid_rpc.o
00:07:58.550 CC module/bdev/raid/bdev_raid_sb.o
00:07:58.550 LIB libspdk_bdev_split.a
00:07:58.550 SYMLINK libspdk_bdev_null.so
00:07:58.550 SO libspdk_bdev_split.so.6.0
00:07:58.550 SYMLINK libspdk_bdev_split.so
00:07:58.550 LIB libspdk_bdev_passthru.a
00:07:58.550 LIB libspdk_bdev_zone_block.a
00:07:58.809 CC module/bdev/xnvme/bdev_xnvme.o
00:07:58.809 SO libspdk_bdev_passthru.so.6.0
00:07:58.809 SO libspdk_bdev_zone_block.so.6.0
00:07:58.809 SYMLINK libspdk_bdev_passthru.so
00:07:58.809 SYMLINK libspdk_bdev_zone_block.so
00:07:58.809 CC module/bdev/nvme/vbdev_opal.o
00:07:58.809 CC module/bdev/aio/bdev_aio.o
00:07:58.809 CC module/bdev/nvme/vbdev_opal_rpc.o
00:07:58.809 CC module/bdev/ftl/bdev_ftl.o
00:07:59.068 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:07:59.068 CC module/bdev/iscsi/bdev_iscsi.o
00:07:59.068 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:07:59.068 CC module/bdev/virtio/bdev_virtio_scsi.o
00:07:59.068 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:07:59.068 CC module/bdev/raid/raid0.o
00:07:59.068 LIB libspdk_bdev_xnvme.a
00:07:59.068 SO libspdk_bdev_xnvme.so.3.0
00:07:59.068 CC module/bdev/ftl/bdev_ftl_rpc.o
00:07:59.327 CC module/bdev/aio/bdev_aio_rpc.o
00:07:59.327 SYMLINK libspdk_bdev_xnvme.so
00:07:59.327 CC module/bdev/virtio/bdev_virtio_blk.o
00:07:59.327 CC module/bdev/virtio/bdev_virtio_rpc.o
00:07:59.327 CC module/bdev/raid/raid1.o
00:07:59.327 LIB libspdk_bdev_aio.a
00:07:59.327 SO libspdk_bdev_aio.so.6.0
00:07:59.327 CC module/bdev/raid/concat.o
00:07:59.585 LIB libspdk_bdev_ftl.a
00:07:59.585 LIB libspdk_bdev_iscsi.a
00:07:59.585 SO libspdk_bdev_ftl.so.6.0
00:07:59.585 SO libspdk_bdev_iscsi.so.6.0
00:07:59.585 SYMLINK libspdk_bdev_aio.so
00:07:59.585 SYMLINK libspdk_bdev_ftl.so
00:07:59.585 SYMLINK libspdk_bdev_iscsi.so
00:07:59.585 LIB libspdk_bdev_virtio.a
00:07:59.845 LIB libspdk_bdev_raid.a
00:07:59.845 SO libspdk_bdev_virtio.so.6.0
00:07:59.845 SO libspdk_bdev_raid.so.6.0
00:07:59.845 SYMLINK libspdk_bdev_virtio.so
00:07:59.845 SYMLINK libspdk_bdev_raid.so
00:08:01.748 LIB libspdk_bdev_nvme.a
00:08:01.748 SO libspdk_bdev_nvme.so.7.1
00:08:01.748 SYMLINK libspdk_bdev_nvme.so
00:08:02.316 CC module/event/subsystems/iobuf/iobuf.o
00:08:02.316 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:08:02.316 CC module/event/subsystems/keyring/keyring.o
00:08:02.316 CC module/event/subsystems/vmd/vmd.o
00:08:02.316 CC module/event/subsystems/vmd/vmd_rpc.o
00:08:02.316 CC module/event/subsystems/sock/sock.o
00:08:02.316 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:08:02.316 CC module/event/subsystems/fsdev/fsdev.o
00:08:02.316 CC module/event/subsystems/scheduler/scheduler.o
00:08:02.316 LIB libspdk_event_keyring.a
00:08:02.316 LIB libspdk_event_vmd.a
00:08:02.316 LIB libspdk_event_vhost_blk.a
00:08:02.316 LIB libspdk_event_sock.a
00:08:02.316 LIB libspdk_event_scheduler.a
00:08:02.316 LIB libspdk_event_fsdev.a
00:08:02.575 LIB libspdk_event_iobuf.a
00:08:02.575 SO libspdk_event_keyring.so.1.0
00:08:02.575 SO libspdk_event_vhost_blk.so.3.0
00:08:02.575 SO libspdk_event_vmd.so.6.0
00:08:02.575 SO libspdk_event_scheduler.so.4.0
00:08:02.575 SO libspdk_event_fsdev.so.1.0
00:08:02.575 SO libspdk_event_sock.so.5.0
00:08:02.575 SO libspdk_event_iobuf.so.3.0
00:08:02.575 SYMLINK libspdk_event_vhost_blk.so
00:08:02.575 SYMLINK libspdk_event_keyring.so
00:08:02.575 SYMLINK libspdk_event_fsdev.so
00:08:02.575 SYMLINK libspdk_event_scheduler.so
00:08:02.575 SYMLINK libspdk_event_sock.so
00:08:02.575 SYMLINK libspdk_event_vmd.so
00:08:02.575 SYMLINK libspdk_event_iobuf.so
00:08:02.835 CC module/event/subsystems/accel/accel.o
00:08:03.094 LIB libspdk_event_accel.a
00:08:03.094 SO libspdk_event_accel.so.6.0
00:08:03.094 SYMLINK libspdk_event_accel.so
00:08:03.353 CC module/event/subsystems/bdev/bdev.o
00:08:03.612 LIB libspdk_event_bdev.a
00:08:03.612 SO libspdk_event_bdev.so.6.0
00:08:03.870 SYMLINK libspdk_event_bdev.so
00:08:04.129 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:08:04.129 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:08:04.129 CC module/event/subsystems/ublk/ublk.o
00:08:04.129 CC module/event/subsystems/nbd/nbd.o
00:08:04.129 CC module/event/subsystems/scsi/scsi.o
00:08:04.129 LIB libspdk_event_ublk.a
00:08:04.129 LIB libspdk_event_nbd.a
00:08:04.388 SO libspdk_event_ublk.so.3.0
00:08:04.388 LIB libspdk_event_scsi.a
00:08:04.388 SO libspdk_event_nbd.so.6.0
00:08:04.388 SO libspdk_event_scsi.so.6.0
00:08:04.388 SYMLINK libspdk_event_ublk.so
00:08:04.388 SYMLINK libspdk_event_nbd.so
00:08:04.388 SYMLINK libspdk_event_scsi.so
00:08:04.388 LIB libspdk_event_nvmf.a
00:08:04.388 SO libspdk_event_nvmf.so.6.0
00:08:04.388 SYMLINK libspdk_event_nvmf.so
00:08:04.647 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:08:04.647 CC module/event/subsystems/iscsi/iscsi.o
00:08:04.905 LIB libspdk_event_vhost_scsi.a
00:08:04.905 LIB libspdk_event_iscsi.a
00:08:04.905 SO libspdk_event_vhost_scsi.so.3.0
00:08:04.905 SO libspdk_event_iscsi.so.6.0
00:08:04.905 SYMLINK libspdk_event_vhost_scsi.so
00:08:04.905 SYMLINK libspdk_event_iscsi.so
00:08:05.164 SO libspdk.so.6.0
00:08:05.164 SYMLINK libspdk.so
00:08:05.422 CC app/trace_record/trace_record.o
00:08:05.422 CXX app/trace/trace.o
00:08:05.422 CC app/spdk_lspci/spdk_lspci.o
00:08:05.422 CC examples/interrupt_tgt/interrupt_tgt.o
00:08:05.422 CC app/iscsi_tgt/iscsi_tgt.o
00:08:05.422 CC app/nvmf_tgt/nvmf_main.o
00:08:05.681 CC app/spdk_tgt/spdk_tgt.o
00:08:05.681 CC examples/util/zipf/zipf.o
00:08:05.681 CC examples/ioat/perf/perf.o
00:08:05.681 CC test/thread/poller_perf/poller_perf.o
00:08:05.681 LINK spdk_lspci
00:08:05.681 LINK interrupt_tgt
00:08:05.940 LINK
nvmf_tgt 00:08:05.940 LINK zipf 00:08:05.940 LINK iscsi_tgt 00:08:05.940 LINK poller_perf 00:08:05.940 LINK spdk_trace_record 00:08:05.940 LINK spdk_tgt 00:08:05.940 LINK ioat_perf 00:08:05.940 CC examples/ioat/verify/verify.o 00:08:05.940 LINK spdk_trace 00:08:06.199 CC app/spdk_nvme_perf/perf.o 00:08:06.199 CC app/spdk_nvme_identify/identify.o 00:08:06.199 CC app/spdk_nvme_discover/discovery_aer.o 00:08:06.199 TEST_HEADER include/spdk/accel.h 00:08:06.199 TEST_HEADER include/spdk/accel_module.h 00:08:06.199 TEST_HEADER include/spdk/assert.h 00:08:06.199 CC examples/sock/hello_world/hello_sock.o 00:08:06.199 TEST_HEADER include/spdk/barrier.h 00:08:06.199 TEST_HEADER include/spdk/base64.h 00:08:06.199 TEST_HEADER include/spdk/bdev.h 00:08:06.199 CC test/dma/test_dma/test_dma.o 00:08:06.199 TEST_HEADER include/spdk/bdev_module.h 00:08:06.199 TEST_HEADER include/spdk/bdev_zone.h 00:08:06.199 TEST_HEADER include/spdk/bit_array.h 00:08:06.199 TEST_HEADER include/spdk/bit_pool.h 00:08:06.199 TEST_HEADER include/spdk/blob_bdev.h 00:08:06.199 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:06.199 TEST_HEADER include/spdk/blobfs.h 00:08:06.199 TEST_HEADER include/spdk/blob.h 00:08:06.199 TEST_HEADER include/spdk/conf.h 00:08:06.199 LINK verify 00:08:06.199 TEST_HEADER include/spdk/config.h 00:08:06.199 TEST_HEADER include/spdk/cpuset.h 00:08:06.199 TEST_HEADER include/spdk/crc16.h 00:08:06.199 TEST_HEADER include/spdk/crc32.h 00:08:06.199 TEST_HEADER include/spdk/crc64.h 00:08:06.199 TEST_HEADER include/spdk/dif.h 00:08:06.199 TEST_HEADER include/spdk/dma.h 00:08:06.199 TEST_HEADER include/spdk/endian.h 00:08:06.199 TEST_HEADER include/spdk/env_dpdk.h 00:08:06.199 CC examples/thread/thread/thread_ex.o 00:08:06.199 TEST_HEADER include/spdk/env.h 00:08:06.199 TEST_HEADER include/spdk/event.h 00:08:06.199 TEST_HEADER include/spdk/fd_group.h 00:08:06.199 TEST_HEADER include/spdk/fd.h 00:08:06.199 TEST_HEADER include/spdk/file.h 00:08:06.199 TEST_HEADER include/spdk/fsdev.h 00:08:06.199 TEST_HEADER include/spdk/fsdev_module.h 00:08:06.199 TEST_HEADER include/spdk/ftl.h 00:08:06.199 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:06.199 TEST_HEADER include/spdk/gpt_spec.h 00:08:06.199 TEST_HEADER include/spdk/hexlify.h 00:08:06.199 TEST_HEADER include/spdk/histogram_data.h 00:08:06.199 TEST_HEADER include/spdk/idxd.h 00:08:06.199 TEST_HEADER include/spdk/idxd_spec.h 00:08:06.199 TEST_HEADER include/spdk/init.h 00:08:06.199 TEST_HEADER include/spdk/ioat.h 00:08:06.199 TEST_HEADER include/spdk/ioat_spec.h 00:08:06.199 TEST_HEADER include/spdk/iscsi_spec.h 00:08:06.199 CC test/app/bdev_svc/bdev_svc.o 00:08:06.199 TEST_HEADER include/spdk/json.h 00:08:06.199 TEST_HEADER include/spdk/jsonrpc.h 00:08:06.199 TEST_HEADER include/spdk/keyring.h 00:08:06.199 TEST_HEADER include/spdk/keyring_module.h 00:08:06.199 TEST_HEADER include/spdk/likely.h 00:08:06.199 TEST_HEADER include/spdk/log.h 00:08:06.199 TEST_HEADER include/spdk/lvol.h 00:08:06.199 TEST_HEADER include/spdk/md5.h 00:08:06.199 TEST_HEADER include/spdk/memory.h 00:08:06.199 TEST_HEADER include/spdk/mmio.h 00:08:06.199 TEST_HEADER include/spdk/nbd.h 00:08:06.199 TEST_HEADER include/spdk/net.h 00:08:06.199 TEST_HEADER include/spdk/notify.h 00:08:06.458 TEST_HEADER include/spdk/nvme.h 00:08:06.458 TEST_HEADER include/spdk/nvme_intel.h 00:08:06.458 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:06.458 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:06.458 TEST_HEADER include/spdk/nvme_spec.h 00:08:06.458 TEST_HEADER include/spdk/nvme_zns.h 00:08:06.458 
TEST_HEADER include/spdk/nvmf_cmd.h 00:08:06.458 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:06.458 TEST_HEADER include/spdk/nvmf.h 00:08:06.458 TEST_HEADER include/spdk/nvmf_spec.h 00:08:06.458 TEST_HEADER include/spdk/nvmf_transport.h 00:08:06.458 TEST_HEADER include/spdk/opal.h 00:08:06.458 TEST_HEADER include/spdk/opal_spec.h 00:08:06.458 TEST_HEADER include/spdk/pci_ids.h 00:08:06.458 TEST_HEADER include/spdk/pipe.h 00:08:06.458 TEST_HEADER include/spdk/queue.h 00:08:06.458 TEST_HEADER include/spdk/reduce.h 00:08:06.458 TEST_HEADER include/spdk/rpc.h 00:08:06.458 TEST_HEADER include/spdk/scheduler.h 00:08:06.458 TEST_HEADER include/spdk/scsi.h 00:08:06.458 TEST_HEADER include/spdk/scsi_spec.h 00:08:06.458 TEST_HEADER include/spdk/sock.h 00:08:06.458 TEST_HEADER include/spdk/stdinc.h 00:08:06.458 TEST_HEADER include/spdk/string.h 00:08:06.458 TEST_HEADER include/spdk/thread.h 00:08:06.458 TEST_HEADER include/spdk/trace.h 00:08:06.458 TEST_HEADER include/spdk/trace_parser.h 00:08:06.458 TEST_HEADER include/spdk/tree.h 00:08:06.458 TEST_HEADER include/spdk/ublk.h 00:08:06.458 TEST_HEADER include/spdk/util.h 00:08:06.458 TEST_HEADER include/spdk/uuid.h 00:08:06.458 TEST_HEADER include/spdk/version.h 00:08:06.458 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:06.458 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:06.458 TEST_HEADER include/spdk/vhost.h 00:08:06.458 CC test/env/mem_callbacks/mem_callbacks.o 00:08:06.458 TEST_HEADER include/spdk/vmd.h 00:08:06.458 TEST_HEADER include/spdk/xor.h 00:08:06.458 TEST_HEADER include/spdk/zipf.h 00:08:06.458 CXX test/cpp_headers/accel.o 00:08:06.458 LINK spdk_nvme_discover 00:08:06.458 CC app/spdk_top/spdk_top.o 00:08:06.458 LINK bdev_svc 00:08:06.458 LINK hello_sock 00:08:06.717 LINK thread 00:08:06.717 CXX test/cpp_headers/accel_module.o 00:08:06.717 CC test/env/vtophys/vtophys.o 00:08:06.717 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:06.717 LINK test_dma 00:08:06.717 CXX test/cpp_headers/assert.o 00:08:06.975 LINK vtophys 00:08:06.975 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:06.975 CC examples/vmd/lsvmd/lsvmd.o 00:08:06.975 LINK env_dpdk_post_init 00:08:06.975 CXX test/cpp_headers/barrier.o 00:08:06.975 CXX test/cpp_headers/base64.o 00:08:06.975 LINK mem_callbacks 00:08:07.234 CC examples/vmd/led/led.o 00:08:07.234 LINK lsvmd 00:08:07.234 LINK spdk_nvme_perf 00:08:07.234 CXX test/cpp_headers/bdev.o 00:08:07.234 LINK spdk_nvme_identify 00:08:07.234 CXX test/cpp_headers/bdev_module.o 00:08:07.234 CXX test/cpp_headers/bdev_zone.o 00:08:07.234 CXX test/cpp_headers/bit_array.o 00:08:07.234 LINK led 00:08:07.234 CC test/env/memory/memory_ut.o 00:08:07.234 CXX test/cpp_headers/bit_pool.o 00:08:07.234 CXX test/cpp_headers/blob_bdev.o 00:08:07.234 CXX test/cpp_headers/blobfs_bdev.o 00:08:07.492 LINK nvme_fuzz 00:08:07.492 CC test/app/histogram_perf/histogram_perf.o 00:08:07.492 CXX test/cpp_headers/blobfs.o 00:08:07.492 CC test/app/jsoncat/jsoncat.o 00:08:07.492 CC examples/idxd/perf/perf.o 00:08:07.750 CC test/app/stub/stub.o 00:08:07.750 CC examples/accel/perf/accel_perf.o 00:08:07.750 LINK spdk_top 00:08:07.750 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:07.750 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:07.750 LINK histogram_perf 00:08:07.750 CXX test/cpp_headers/blob.o 00:08:07.750 LINK jsoncat 00:08:07.750 LINK stub 00:08:08.008 CXX test/cpp_headers/conf.o 00:08:08.009 CC app/spdk_dd/spdk_dd.o 00:08:08.009 CC app/vhost/vhost.o 00:08:08.009 CXX test/cpp_headers/config.o 00:08:08.009 LINK idxd_perf 00:08:08.009 CXX 
test/cpp_headers/cpuset.o 00:08:08.009 LINK hello_fsdev 00:08:08.009 CXX test/cpp_headers/crc16.o 00:08:08.267 CC app/fio/nvme/fio_plugin.o 00:08:08.267 CXX test/cpp_headers/crc32.o 00:08:08.267 LINK vhost 00:08:08.267 CXX test/cpp_headers/crc64.o 00:08:08.267 LINK accel_perf 00:08:08.267 CXX test/cpp_headers/dif.o 00:08:08.267 CXX test/cpp_headers/dma.o 00:08:08.267 CC app/fio/bdev/fio_plugin.o 00:08:08.525 LINK spdk_dd 00:08:08.525 CXX test/cpp_headers/endian.o 00:08:08.525 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:08.525 CC examples/blob/hello_world/hello_blob.o 00:08:08.525 CC examples/nvme/hello_world/hello_world.o 00:08:08.783 CC examples/bdev/hello_world/hello_bdev.o 00:08:08.783 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:08.783 CC examples/nvme/reconnect/reconnect.o 00:08:08.783 LINK memory_ut 00:08:08.783 CXX test/cpp_headers/env_dpdk.o 00:08:08.783 LINK spdk_nvme 00:08:09.041 LINK hello_world 00:08:09.041 LINK hello_blob 00:08:09.041 CXX test/cpp_headers/env.o 00:08:09.041 LINK hello_bdev 00:08:09.041 LINK spdk_bdev 00:08:09.041 CC test/env/pci/pci_ut.o 00:08:09.041 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:09.041 CXX test/cpp_headers/event.o 00:08:09.299 LINK reconnect 00:08:09.299 CC examples/nvme/arbitration/arbitration.o 00:08:09.299 CC examples/blob/cli/blobcli.o 00:08:09.299 LINK vhost_fuzz 00:08:09.299 CC examples/bdev/bdevperf/bdevperf.o 00:08:09.300 CC test/event/event_perf/event_perf.o 00:08:09.300 CXX test/cpp_headers/fd_group.o 00:08:09.300 CXX test/cpp_headers/fd.o 00:08:09.300 CXX test/cpp_headers/file.o 00:08:09.557 LINK event_perf 00:08:09.557 LINK pci_ut 00:08:09.557 CXX test/cpp_headers/fsdev.o 00:08:09.557 LINK arbitration 00:08:09.557 CC test/event/reactor/reactor.o 00:08:09.557 CXX test/cpp_headers/fsdev_module.o 00:08:09.816 CC test/event/reactor_perf/reactor_perf.o 00:08:09.816 LINK nvme_manage 00:08:09.816 LINK reactor 00:08:09.816 CXX test/cpp_headers/ftl.o 00:08:09.816 CC test/event/app_repeat/app_repeat.o 00:08:09.816 LINK blobcli 00:08:09.816 LINK reactor_perf 00:08:09.816 LINK iscsi_fuzz 00:08:09.816 CC test/event/scheduler/scheduler.o 00:08:10.074 CC test/nvme/aer/aer.o 00:08:10.074 CC examples/nvme/hotplug/hotplug.o 00:08:10.074 CC test/nvme/reset/reset.o 00:08:10.074 LINK app_repeat 00:08:10.074 CXX test/cpp_headers/fuse_dispatcher.o 00:08:10.074 CC test/rpc_client/rpc_client_test.o 00:08:10.074 CXX test/cpp_headers/gpt_spec.o 00:08:10.074 LINK scheduler 00:08:10.332 LINK hotplug 00:08:10.332 CC test/accel/dif/dif.o 00:08:10.333 CXX test/cpp_headers/hexlify.o 00:08:10.333 LINK bdevperf 00:08:10.333 LINK reset 00:08:10.333 LINK rpc_client_test 00:08:10.333 LINK aer 00:08:10.333 CXX test/cpp_headers/histogram_data.o 00:08:10.333 CC test/blobfs/mkfs/mkfs.o 00:08:10.591 CXX test/cpp_headers/idxd.o 00:08:10.591 CC test/lvol/esnap/esnap.o 00:08:10.591 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:10.591 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:10.591 CC test/nvme/e2edp/nvme_dp.o 00:08:10.591 CC examples/nvme/abort/abort.o 00:08:10.591 CC test/nvme/sgl/sgl.o 00:08:10.591 LINK mkfs 00:08:10.591 CC test/nvme/overhead/overhead.o 00:08:10.591 CXX test/cpp_headers/idxd_spec.o 00:08:10.851 LINK cmb_copy 00:08:10.851 LINK pmr_persistence 00:08:10.851 CXX test/cpp_headers/init.o 00:08:10.851 LINK sgl 00:08:10.851 LINK nvme_dp 00:08:11.109 LINK overhead 00:08:11.109 CC test/nvme/err_injection/err_injection.o 00:08:11.109 CC test/nvme/startup/startup.o 00:08:11.109 LINK abort 00:08:11.109 CC test/nvme/reserve/reserve.o 00:08:11.109 
CXX test/cpp_headers/ioat.o 00:08:11.109 CXX test/cpp_headers/ioat_spec.o 00:08:11.109 LINK dif 00:08:11.109 CC test/nvme/simple_copy/simple_copy.o 00:08:11.368 LINK err_injection 00:08:11.368 LINK startup 00:08:11.368 CXX test/cpp_headers/iscsi_spec.o 00:08:11.368 CC test/nvme/connect_stress/connect_stress.o 00:08:11.368 LINK reserve 00:08:11.368 CC test/nvme/boot_partition/boot_partition.o 00:08:11.368 CXX test/cpp_headers/json.o 00:08:11.368 CC examples/nvmf/nvmf/nvmf.o 00:08:11.368 CC test/nvme/compliance/nvme_compliance.o 00:08:11.368 LINK simple_copy 00:08:11.627 LINK connect_stress 00:08:11.627 CC test/nvme/fused_ordering/fused_ordering.o 00:08:11.627 LINK boot_partition 00:08:11.627 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:11.627 CXX test/cpp_headers/jsonrpc.o 00:08:11.627 CC test/bdev/bdevio/bdevio.o 00:08:11.627 CXX test/cpp_headers/keyring.o 00:08:11.887 CC test/nvme/fdp/fdp.o 00:08:11.887 CC test/nvme/cuse/cuse.o 00:08:11.887 LINK fused_ordering 00:08:11.887 CXX test/cpp_headers/keyring_module.o 00:08:11.887 LINK nvmf 00:08:11.887 LINK doorbell_aers 00:08:11.887 CXX test/cpp_headers/likely.o 00:08:11.887 LINK nvme_compliance 00:08:11.887 CXX test/cpp_headers/log.o 00:08:12.146 CXX test/cpp_headers/lvol.o 00:08:12.146 CXX test/cpp_headers/md5.o 00:08:12.146 CXX test/cpp_headers/memory.o 00:08:12.146 CXX test/cpp_headers/mmio.o 00:08:12.146 CXX test/cpp_headers/nbd.o 00:08:12.146 CXX test/cpp_headers/net.o 00:08:12.146 LINK bdevio 00:08:12.146 CXX test/cpp_headers/notify.o 00:08:12.146 LINK fdp 00:08:12.146 CXX test/cpp_headers/nvme.o 00:08:12.146 CXX test/cpp_headers/nvme_intel.o 00:08:12.406 CXX test/cpp_headers/nvme_ocssd.o 00:08:12.406 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:12.406 CXX test/cpp_headers/nvme_spec.o 00:08:12.406 CXX test/cpp_headers/nvme_zns.o 00:08:12.406 CXX test/cpp_headers/nvmf_cmd.o 00:08:12.406 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:12.406 CXX test/cpp_headers/nvmf.o 00:08:12.406 CXX test/cpp_headers/nvmf_spec.o 00:08:12.406 CXX test/cpp_headers/opal.o 00:08:12.406 CXX test/cpp_headers/nvmf_transport.o 00:08:12.665 CXX test/cpp_headers/opal_spec.o 00:08:12.665 CXX test/cpp_headers/pci_ids.o 00:08:12.665 CXX test/cpp_headers/pipe.o 00:08:12.665 CXX test/cpp_headers/queue.o 00:08:12.665 CXX test/cpp_headers/reduce.o 00:08:12.665 CXX test/cpp_headers/rpc.o 00:08:12.666 CXX test/cpp_headers/scheduler.o 00:08:12.666 CXX test/cpp_headers/scsi.o 00:08:12.666 CXX test/cpp_headers/scsi_spec.o 00:08:12.925 CXX test/cpp_headers/sock.o 00:08:12.925 CXX test/cpp_headers/stdinc.o 00:08:12.925 CXX test/cpp_headers/string.o 00:08:12.925 CXX test/cpp_headers/thread.o 00:08:12.925 CXX test/cpp_headers/trace.o 00:08:12.925 CXX test/cpp_headers/trace_parser.o 00:08:12.925 CXX test/cpp_headers/tree.o 00:08:12.925 CXX test/cpp_headers/ublk.o 00:08:12.925 CXX test/cpp_headers/util.o 00:08:12.925 CXX test/cpp_headers/uuid.o 00:08:12.925 CXX test/cpp_headers/version.o 00:08:12.925 CXX test/cpp_headers/vfio_user_pci.o 00:08:13.185 CXX test/cpp_headers/vfio_user_spec.o 00:08:13.185 CXX test/cpp_headers/vhost.o 00:08:13.185 CXX test/cpp_headers/vmd.o 00:08:13.185 CXX test/cpp_headers/xor.o 00:08:13.185 CXX test/cpp_headers/zipf.o 00:08:13.444 LINK cuse 00:08:17.646 LINK esnap 00:08:18.211 00:08:18.211 real 1m42.000s 00:08:18.211 user 9m18.928s 00:08:18.211 sys 1m54.335s 00:08:18.211 10:14:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:18.211 10:14:12 make -- common/autotest_common.sh@10 -- $ set +x 00:08:18.211 
************************************ 00:08:18.211 END TEST make 00:08:18.211 ************************************ 00:08:18.211 10:14:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:18.211 10:14:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:18.211 10:14:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:18.211 10:14:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.211 10:14:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:18.211 10:14:12 -- pm/common@44 -- $ pid=5335 00:08:18.211 10:14:12 -- pm/common@50 -- $ kill -TERM 5335 00:08:18.211 10:14:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.211 10:14:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:18.211 10:14:12 -- pm/common@44 -- $ pid=5337 00:08:18.211 10:14:12 -- pm/common@50 -- $ kill -TERM 5337 00:08:18.211 10:14:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:18.211 10:14:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:18.211 10:14:12 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.211 10:14:12 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.211 10:14:12 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.211 10:14:12 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.211 10:14:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.211 10:14:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.211 10:14:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.211 10:14:12 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.211 10:14:12 -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.211 10:14:12 -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.211 10:14:12 -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.211 10:14:12 -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.211 10:14:12 -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.211 10:14:12 -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.211 10:14:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.211 10:14:12 -- scripts/common.sh@344 -- # case "$op" in 00:08:18.211 10:14:12 -- scripts/common.sh@345 -- # : 1 00:08:18.211 10:14:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.211 10:14:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.211 10:14:12 -- scripts/common.sh@365 -- # decimal 1 00:08:18.211 10:14:12 -- scripts/common.sh@353 -- # local d=1 00:08:18.211 10:14:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.211 10:14:12 -- scripts/common.sh@355 -- # echo 1 00:08:18.211 10:14:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.211 10:14:12 -- scripts/common.sh@366 -- # decimal 2 00:08:18.211 10:14:12 -- scripts/common.sh@353 -- # local d=2 00:08:18.211 10:14:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.211 10:14:12 -- scripts/common.sh@355 -- # echo 2 00:08:18.211 10:14:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.211 10:14:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.211 10:14:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.211 10:14:12 -- scripts/common.sh@368 -- # return 0 00:08:18.211 10:14:12 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.212 10:14:12 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.212 --rc genhtml_branch_coverage=1 00:08:18.212 --rc genhtml_function_coverage=1 00:08:18.212 --rc genhtml_legend=1 00:08:18.212 --rc geninfo_all_blocks=1 00:08:18.212 --rc geninfo_unexecuted_blocks=1 00:08:18.212 00:08:18.212 ' 00:08:18.212 10:14:12 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.212 --rc genhtml_branch_coverage=1 00:08:18.212 --rc genhtml_function_coverage=1 00:08:18.212 --rc genhtml_legend=1 00:08:18.212 --rc geninfo_all_blocks=1 00:08:18.212 --rc geninfo_unexecuted_blocks=1 00:08:18.212 00:08:18.212 ' 00:08:18.212 10:14:12 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.212 --rc genhtml_branch_coverage=1 00:08:18.212 --rc genhtml_function_coverage=1 00:08:18.212 --rc genhtml_legend=1 00:08:18.212 --rc geninfo_all_blocks=1 00:08:18.212 --rc geninfo_unexecuted_blocks=1 00:08:18.212 00:08:18.212 ' 00:08:18.212 10:14:12 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.212 --rc genhtml_branch_coverage=1 00:08:18.212 --rc genhtml_function_coverage=1 00:08:18.212 --rc genhtml_legend=1 00:08:18.212 --rc geninfo_all_blocks=1 00:08:18.212 --rc geninfo_unexecuted_blocks=1 00:08:18.212 00:08:18.212 ' 00:08:18.212 10:14:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.212 10:14:12 -- nvmf/common.sh@7 -- # uname -s 00:08:18.212 10:14:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.212 10:14:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.212 10:14:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.212 10:14:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.212 10:14:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.212 10:14:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.212 10:14:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.212 10:14:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.212 10:14:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.212 10:14:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.470 10:14:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28cd232b-d928-4e5c-ad06-351eb2523405 00:08:18.470 
10:14:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=28cd232b-d928-4e5c-ad06-351eb2523405 00:08:18.470 10:14:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.470 10:14:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.470 10:14:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:18.470 10:14:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.470 10:14:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.470 10:14:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.470 10:14:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.470 10:14:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.470 10:14:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.470 10:14:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.470 10:14:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.470 10:14:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.470 10:14:12 -- paths/export.sh@5 -- # export PATH 00:08:18.470 10:14:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.470 10:14:12 -- nvmf/common.sh@51 -- # : 0 00:08:18.470 10:14:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.470 10:14:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.470 10:14:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.470 10:14:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.470 10:14:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.470 10:14:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.470 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.470 10:14:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.470 10:14:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.470 10:14:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.470 10:14:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:18.470 10:14:12 -- spdk/autotest.sh@32 -- # uname -s 00:08:18.470 10:14:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:18.470 10:14:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:18.470 10:14:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:18.470 10:14:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:18.470 10:14:12 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:18.470 10:14:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:18.470 10:14:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:18.470 10:14:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:18.470 10:14:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54941 00:08:18.470 10:14:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:18.470 10:14:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:18.470 10:14:12 -- pm/common@17 -- # local monitor 00:08:18.470 10:14:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.470 10:14:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.470 10:14:12 -- pm/common@21 -- # date +%s 00:08:18.470 10:14:12 -- pm/common@25 -- # sleep 1 00:08:18.470 10:14:12 -- pm/common@21 -- # date +%s 00:08:18.470 10:14:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732529652 00:08:18.470 10:14:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732529652 00:08:18.470 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732529652_collect-vmstat.pm.log 00:08:18.470 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732529652_collect-cpu-load.pm.log 00:08:19.404 10:14:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:19.404 10:14:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:19.404 10:14:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.404 10:14:13 -- common/autotest_common.sh@10 -- # set +x 00:08:19.405 10:14:13 -- spdk/autotest.sh@59 -- # create_test_list 00:08:19.405 10:14:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:19.405 10:14:13 -- common/autotest_common.sh@10 -- # set +x 00:08:19.405 10:14:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:19.405 10:14:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:19.405 10:14:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:19.405 10:14:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:19.405 10:14:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:19.405 10:14:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:19.405 10:14:13 -- common/autotest_common.sh@1457 -- # uname 00:08:19.405 10:14:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:19.405 10:14:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:19.405 10:14:13 -- common/autotest_common.sh@1477 -- # uname 00:08:19.405 10:14:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:19.405 10:14:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:19.405 10:14:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:19.663 lcov: LCOV version 1.15 00:08:19.663 10:14:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:34.546 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:34.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:49.417 10:14:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:49.417 10:14:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.417 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:08:49.417 10:14:43 -- spdk/autotest.sh@78 -- # rm -f 00:08:49.417 10:14:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:49.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.239 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:50.239 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:50.239 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:08:50.498 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:08:50.498 10:14:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:50.498 10:14:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:50.498 10:14:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:50.498 10:14:44 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:50.498 10:14:44 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:50.498 10:14:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:50.498 10:14:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:50.498 10:14:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:50.498 10:14:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:50.498 10:14:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:50.498 10:14:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:50.498 10:14:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:50.498 10:14:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:50.498 10:14:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:50.498 No valid GPT data, bailing 00:08:50.498 10:14:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:50.498 10:14:44 -- scripts/common.sh@394 -- # pt= 00:08:50.498 10:14:44 -- scripts/common.sh@395 -- # return 1 00:08:50.498 10:14:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:50.498 1+0 records in 00:08:50.498 1+0 records out 00:08:50.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147797 s, 70.9 MB/s 00:08:50.498 10:14:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:50.498 10:14:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:50.498 10:14:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:50.498 10:14:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:50.498 10:14:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:50.498 No valid GPT data, bailing 00:08:50.498 10:14:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:50.498 10:14:44 -- scripts/common.sh@394 -- # pt= 00:08:50.498 10:14:44 -- scripts/common.sh@395 -- # return 1 00:08:50.498 10:14:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:50.498 1+0 records in 00:08:50.498 1+0 records out 00:08:50.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417742 s, 251 MB/s 00:08:50.498 10:14:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:50.498 10:14:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:50.499 10:14:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:50.499 10:14:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:50.499 10:14:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:50.756 No valid GPT data, bailing 00:08:50.756 10:14:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:50.756 10:14:44 -- scripts/common.sh@394 -- # pt= 00:08:50.756 10:14:44 -- scripts/common.sh@395 -- # return 1 00:08:50.756 10:14:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:50.756 1+0 
records in 00:08:50.756 1+0 records out 00:08:50.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046672 s, 225 MB/s 00:08:50.756 10:14:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:50.756 10:14:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:50.756 10:14:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:50.756 10:14:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:50.756 10:14:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:50.756 No valid GPT data, bailing 00:08:50.756 10:14:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:50.756 10:14:44 -- scripts/common.sh@394 -- # pt= 00:08:50.756 10:14:44 -- scripts/common.sh@395 -- # return 1 00:08:50.756 10:14:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:50.756 1+0 records in 00:08:50.756 1+0 records out 00:08:50.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00357272 s, 293 MB/s 00:08:50.756 10:14:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:50.756 10:14:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:50.756 10:14:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:08:50.756 10:14:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:08:50.756 10:14:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:08:50.756 No valid GPT data, bailing 00:08:50.756 10:14:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:08:50.757 10:14:45 -- scripts/common.sh@394 -- # pt= 00:08:50.757 10:14:45 -- scripts/common.sh@395 -- # return 1 00:08:50.757 10:14:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:08:50.757 1+0 records in 00:08:50.757 1+0 records out 00:08:50.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417442 s, 251 MB/s 00:08:50.757 10:14:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:50.757 10:14:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:50.757 10:14:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:08:50.757 10:14:45 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:08:50.757 10:14:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:08:50.757 No valid GPT data, bailing 00:08:50.757 10:14:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:08:51.014 10:14:45 -- scripts/common.sh@394 -- # pt= 00:08:51.014 10:14:45 -- scripts/common.sh@395 -- # return 1 00:08:51.014 10:14:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:08:51.014 1+0 records in 00:08:51.014 1+0 records out 00:08:51.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472183 s, 222 MB/s 00:08:51.014 10:14:45 -- spdk/autotest.sh@105 -- # sync 00:08:51.014 10:14:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:51.014 10:14:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:51.014 10:14:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:52.915 10:14:47 -- spdk/autotest.sh@111 -- # uname -s 00:08:52.915 10:14:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:52.915 10:14:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:52.915 10:14:47 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:53.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.052 
Hugepages 00:08:54.052 node hugesize free / total 00:08:54.052 node0 1048576kB 0 / 0 00:08:54.052 node0 2048kB 0 / 0 00:08:54.052 00:08:54.052 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:54.052 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:54.052 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:54.052 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:08:54.310 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:54.310 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:54.310 10:14:48 -- spdk/autotest.sh@117 -- # uname -s 00:08:54.310 10:14:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:54.310 10:14:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:54.310 10:14:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.455 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.455 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.713 10:14:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:56.647 10:14:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:56.647 10:14:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:56.647 10:14:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:56.647 10:14:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:56.647 10:14:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:56.647 10:14:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:56.647 10:14:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:56.647 10:14:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:56.647 10:14:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:56.647 10:14:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:56.647 10:14:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:56.647 10:14:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:57.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.213 Waiting for block devices as requested 00:08:57.471 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.471 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.730 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.010 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:03.010 10:14:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:03.010 10:14:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:03.010 10:14:56 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:03.010 10:14:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:03.010 10:14:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:03.010 10:14:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:03.010 10:14:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:03.010 10:14:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:03.010 10:14:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1543 -- # continue 00:09:03.010 10:14:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:03.010 10:14:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:03.010 10:14:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:03.010 10:14:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:03.010 10:14:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:03.010 10:14:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:03.010 10:14:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:03.010 10:14:56 -- common/autotest_common.sh@1543 -- # continue 00:09:03.010 10:14:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:03.010 10:14:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:09:03.010 10:14:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:03.010 10:14:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:09:03.010 10:14:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:03.010 10:14:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:03.011 10:14:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:03.011 10:14:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:03.011 10:14:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:03.011 10:14:57 -- common/autotest_common.sh@1543 -- # continue 00:09:03.011 10:14:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:03.011 10:14:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:09:03.011 10:14:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:09:03.011 10:14:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:09:03.011 10:14:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:03.011 10:14:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:03.011 10:14:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:03.011 10:14:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:03.011 10:14:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:09:03.011 10:14:57 -- common/autotest_common.sh@1543 -- # continue 00:09:03.011 10:14:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:03.011 10:14:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.011 10:14:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.011 10:14:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:03.011 10:14:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.011 10:14:57 -- common/autotest_common.sh@10 -- # set +x 00:09:03.011 10:14:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:03.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:04.143 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:04.143 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:04.143 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:04.143 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:04.143 10:14:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:04.143 10:14:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.143 10:14:58 -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 10:14:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:04.143 10:14:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:04.143 10:14:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:04.143 10:14:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:04.143 10:14:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:04.143 10:14:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:04.143 10:14:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:04.143 10:14:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:04.143 10:14:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:04.143 10:14:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:04.143 10:14:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:04.143 10:14:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:04.143 10:14:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:04.401 10:14:58 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:04.401 10:14:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:04.401 10:14:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:04.401 10:14:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:04.401 10:14:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:04.401 10:14:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:04.401 10:14:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:04.401 10:14:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:09:04.401 10:14:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:09:04.401 10:14:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:04.401 10:14:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:04.401 10:14:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:04.401 10:14:58 -- common/autotest_common.sh@1572 -- # return 0 00:09:04.401 10:14:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:04.401 10:14:58 -- common/autotest_common.sh@1580 -- # return 0 00:09:04.401 10:14:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:04.401 10:14:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:04.401 10:14:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:04.401 10:14:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:04.401 10:14:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:04.401 10:14:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.401 10:14:58 -- common/autotest_common.sh@10 -- # set +x 00:09:04.401 10:14:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:04.401 10:14:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:04.401 10:14:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.401 10:14:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.401 10:14:58 -- common/autotest_common.sh@10 -- # set +x 00:09:04.401 ************************************ 00:09:04.401 START TEST env 00:09:04.401 ************************************ 00:09:04.401 10:14:58 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:04.401 * Looking for test storage... 00:09:04.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:04.401 10:14:58 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.401 10:14:58 env -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.401 10:14:58 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.660 10:14:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.660 10:14:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.660 10:14:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.660 10:14:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.660 10:14:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.660 10:14:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.660 10:14:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.660 10:14:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.660 10:14:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.660 10:14:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.660 10:14:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.660 10:14:58 env -- scripts/common.sh@344 -- # case "$op" in 00:09:04.660 10:14:58 env -- scripts/common.sh@345 -- # : 1 00:09:04.660 10:14:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.660 10:14:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.660 10:14:58 env -- scripts/common.sh@365 -- # decimal 1 00:09:04.660 10:14:58 env -- scripts/common.sh@353 -- # local d=1 00:09:04.660 10:14:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.660 10:14:58 env -- scripts/common.sh@355 -- # echo 1 00:09:04.660 10:14:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.660 10:14:58 env -- scripts/common.sh@366 -- # decimal 2 00:09:04.660 10:14:58 env -- scripts/common.sh@353 -- # local d=2 00:09:04.660 10:14:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.660 10:14:58 env -- scripts/common.sh@355 -- # echo 2 00:09:04.660 10:14:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.660 10:14:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.660 10:14:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.660 10:14:58 env -- scripts/common.sh@368 -- # return 0 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.660 --rc genhtml_branch_coverage=1 00:09:04.660 --rc genhtml_function_coverage=1 00:09:04.660 --rc genhtml_legend=1 00:09:04.660 --rc geninfo_all_blocks=1 00:09:04.660 --rc geninfo_unexecuted_blocks=1 00:09:04.660 00:09:04.660 ' 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.660 --rc genhtml_branch_coverage=1 00:09:04.660 --rc genhtml_function_coverage=1 00:09:04.660 --rc genhtml_legend=1 00:09:04.660 --rc geninfo_all_blocks=1 00:09:04.660 --rc geninfo_unexecuted_blocks=1 00:09:04.660 00:09:04.660 ' 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.660 --rc genhtml_branch_coverage=1 00:09:04.660 --rc genhtml_function_coverage=1 00:09:04.660 --rc genhtml_legend=1 00:09:04.660 --rc geninfo_all_blocks=1 00:09:04.660 --rc geninfo_unexecuted_blocks=1 00:09:04.660 00:09:04.660 ' 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.660 --rc genhtml_branch_coverage=1 00:09:04.660 --rc genhtml_function_coverage=1 00:09:04.660 --rc genhtml_legend=1 00:09:04.660 --rc geninfo_all_blocks=1 00:09:04.660 --rc geninfo_unexecuted_blocks=1 00:09:04.660 00:09:04.660 ' 00:09:04.660 10:14:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.660 10:14:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.660 10:14:58 env -- common/autotest_common.sh@10 -- # set +x 00:09:04.660 ************************************ 00:09:04.660 START TEST env_memory 00:09:04.660 ************************************ 00:09:04.660 10:14:58 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:04.660 00:09:04.660 00:09:04.660 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.660 http://cunit.sourceforge.net/ 00:09:04.660 00:09:04.660 00:09:04.660 Suite: memory 00:09:04.660 Test: alloc and free memory map ...[2024-11-25 10:14:58.863004] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:04.660 passed 00:09:04.660 Test: mem map translation ...[2024-11-25 10:14:58.923287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:04.660 [2024-11-25 10:14:58.923345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:04.660 [2024-11-25 10:14:58.923445] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:04.660 [2024-11-25 10:14:58.923478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:04.919 passed 00:09:04.919 Test: mem map registration ...[2024-11-25 10:14:59.021447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:04.919 [2024-11-25 10:14:59.021511] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:04.919 passed 00:09:04.919 Test: mem map adjacent registrations ...passed 00:09:04.919 00:09:04.919 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.919 suites 1 1 n/a 0 0 00:09:04.919 tests 4 4 4 0 0 00:09:04.919 asserts 152 152 152 0 n/a 00:09:04.919 00:09:04.919 Elapsed time = 0.342 seconds 00:09:04.919 00:09:04.919 real 0m0.381s 00:09:04.919 user 0m0.344s 00:09:04.919 sys 0m0.031s 00:09:04.919 10:14:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.919 10:14:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:04.919 ************************************ 00:09:04.919 END TEST env_memory 00:09:04.919 ************************************ 00:09:04.919 10:14:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:04.919 10:14:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.919 10:14:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.919 10:14:59 env -- common/autotest_common.sh@10 -- # set +x 00:09:04.919 ************************************ 00:09:04.919 START TEST env_vtophys 00:09:04.919 ************************************ 00:09:04.919 10:14:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:05.177 EAL: lib.eal log level changed from notice to debug 00:09:05.177 EAL: Detected lcore 0 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 1 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 2 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 3 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 4 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 5 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 6 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 7 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 8 as core 0 on socket 0 00:09:05.177 EAL: Detected lcore 9 as core 0 on socket 0 00:09:05.177 EAL: Maximum logical cores by configuration: 128 00:09:05.177 EAL: Detected CPU lcores: 10 00:09:05.177 EAL: Detected NUMA nodes: 1 00:09:05.177 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:05.177 EAL: Detected shared linkage of DPDK 00:09:05.177 EAL: No 
shared files mode enabled, IPC will be disabled 00:09:05.177 EAL: Selected IOVA mode 'PA' 00:09:05.177 EAL: Probing VFIO support... 00:09:05.177 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:05.177 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:05.177 EAL: Ask a virtual area of 0x2e000 bytes 00:09:05.177 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:05.177 EAL: Setting up physically contiguous memory... 00:09:05.177 EAL: Setting maximum number of open files to 524288 00:09:05.177 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:05.177 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:05.177 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.177 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:05.177 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.177 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.177 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:05.177 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:05.177 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.177 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:05.177 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.177 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.177 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:05.177 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:05.177 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.177 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:05.177 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.177 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.177 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:05.177 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:05.177 EAL: Ask a virtual area of 0x61000 bytes 00:09:05.177 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:05.177 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:05.178 EAL: Ask a virtual area of 0x400000000 bytes 00:09:05.178 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:05.178 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:05.178 EAL: Hugepages will be freed exactly as allocated. 00:09:05.178 EAL: No shared files mode enabled, IPC is disabled 00:09:05.178 EAL: No shared files mode enabled, IPC is disabled 00:09:05.178 EAL: TSC frequency is ~2200000 KHz 00:09:05.178 EAL: Main lcore 0 is ready (tid=7f5cbb2ada40;cpuset=[0]) 00:09:05.178 EAL: Trying to obtain current memory policy. 00:09:05.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.178 EAL: Restoring previous memory policy: 0 00:09:05.178 EAL: request: mp_malloc_sync 00:09:05.178 EAL: No shared files mode enabled, IPC is disabled 00:09:05.178 EAL: Heap on socket 0 was expanded by 2MB 00:09:05.178 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:05.178 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:05.178 EAL: Mem event callback 'spdk:(nil)' registered 00:09:05.178 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:09:05.178 00:09:05.178 00:09:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.178 http://cunit.sourceforge.net/ 00:09:05.178 00:09:05.178 00:09:05.178 Suite: components_suite 00:09:05.744 Test: vtophys_malloc_test ...passed 00:09:05.744 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:05.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.744 EAL: Restoring previous memory policy: 4 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was expanded by 4MB 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was shrunk by 4MB 00:09:05.744 EAL: Trying to obtain current memory policy. 00:09:05.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.744 EAL: Restoring previous memory policy: 4 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was expanded by 6MB 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was shrunk by 6MB 00:09:05.744 EAL: Trying to obtain current memory policy. 00:09:05.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.744 EAL: Restoring previous memory policy: 4 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was expanded by 10MB 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was shrunk by 10MB 00:09:05.744 EAL: Trying to obtain current memory policy. 00:09:05.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.744 EAL: Restoring previous memory policy: 4 00:09:05.744 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.744 EAL: request: mp_malloc_sync 00:09:05.744 EAL: No shared files mode enabled, IPC is disabled 00:09:05.744 EAL: Heap on socket 0 was expanded by 18MB 00:09:05.745 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.745 EAL: request: mp_malloc_sync 00:09:05.745 EAL: No shared files mode enabled, IPC is disabled 00:09:05.745 EAL: Heap on socket 0 was shrunk by 18MB 00:09:05.745 EAL: Trying to obtain current memory policy. 00:09:05.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:05.745 EAL: Restoring previous memory policy: 4 00:09:05.745 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.745 EAL: request: mp_malloc_sync 00:09:05.745 EAL: No shared files mode enabled, IPC is disabled 00:09:05.745 EAL: Heap on socket 0 was expanded by 34MB 00:09:06.001 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.001 EAL: request: mp_malloc_sync 00:09:06.001 EAL: No shared files mode enabled, IPC is disabled 00:09:06.001 EAL: Heap on socket 0 was shrunk by 34MB 00:09:06.001 EAL: Trying to obtain current memory policy. 
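Each expand/shrink pair above is vtophys_spdk_malloc_test allocating a DMA-safe buffer and freeing it again: when the allocation cannot be satisfied from the existing heap, EAL maps more 2 MB hugepages and fires the registered mem event callback ('spdk:(nil)' in the trace), which is how SPDK learns about new memory and updates its vtophys map; the free path shrinks the heap and fires the callback again. The heap growth is visible from outside the test in the kernel's hugepage counters, for example from a second shell:

    # watch the 2 MB hugepage pool while the test runs
    watch -n1 "grep -E 'HugePages_(Total|Free)' /proc/meminfo"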
00:09:06.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.001 EAL: Restoring previous memory policy: 4 00:09:06.001 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.001 EAL: request: mp_malloc_sync 00:09:06.001 EAL: No shared files mode enabled, IPC is disabled 00:09:06.001 EAL: Heap on socket 0 was expanded by 66MB 00:09:06.001 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.001 EAL: request: mp_malloc_sync 00:09:06.001 EAL: No shared files mode enabled, IPC is disabled 00:09:06.001 EAL: Heap on socket 0 was shrunk by 66MB 00:09:06.278 EAL: Trying to obtain current memory policy. 00:09:06.278 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.278 EAL: Restoring previous memory policy: 4 00:09:06.278 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.278 EAL: request: mp_malloc_sync 00:09:06.278 EAL: No shared files mode enabled, IPC is disabled 00:09:06.278 EAL: Heap on socket 0 was expanded by 130MB 00:09:06.536 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.536 EAL: request: mp_malloc_sync 00:09:06.536 EAL: No shared files mode enabled, IPC is disabled 00:09:06.536 EAL: Heap on socket 0 was shrunk by 130MB 00:09:06.536 EAL: Trying to obtain current memory policy. 00:09:06.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:06.793 EAL: Restoring previous memory policy: 4 00:09:06.793 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.793 EAL: request: mp_malloc_sync 00:09:06.793 EAL: No shared files mode enabled, IPC is disabled 00:09:06.793 EAL: Heap on socket 0 was expanded by 258MB 00:09:07.052 EAL: Calling mem event callback 'spdk:(nil)' 00:09:07.313 EAL: request: mp_malloc_sync 00:09:07.313 EAL: No shared files mode enabled, IPC is disabled 00:09:07.313 EAL: Heap on socket 0 was shrunk by 258MB 00:09:07.571 EAL: Trying to obtain current memory policy. 00:09:07.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:07.571 EAL: Restoring previous memory policy: 4 00:09:07.571 EAL: Calling mem event callback 'spdk:(nil)' 00:09:07.571 EAL: request: mp_malloc_sync 00:09:07.571 EAL: No shared files mode enabled, IPC is disabled 00:09:07.571 EAL: Heap on socket 0 was expanded by 514MB 00:09:08.504 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.763 EAL: request: mp_malloc_sync 00:09:08.763 EAL: No shared files mode enabled, IPC is disabled 00:09:08.763 EAL: Heap on socket 0 was shrunk by 514MB 00:09:09.328 EAL: Trying to obtain current memory policy. 
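The sizes form a clean ladder: the heap grows by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB for k = 1..10. Each round allocates a power-of-two buffer, and the extra 2 MB is consistent with one additional hugepage of malloc-heap overhead per allocation (that reading is an assumption; the arithmetic itself is straight from the log):

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB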
00:09:09.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.895 EAL: Restoring previous memory policy: 4 00:09:09.895 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.895 EAL: request: mp_malloc_sync 00:09:09.895 EAL: No shared files mode enabled, IPC is disabled 00:09:09.895 EAL: Heap on socket 0 was expanded by 1026MB 00:09:11.795 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.054 EAL: request: mp_malloc_sync 00:09:12.054 EAL: No shared files mode enabled, IPC is disabled 00:09:12.054 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:13.427 passed 00:09:13.427 00:09:13.427 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.427 suites 1 1 n/a 0 0 00:09:13.427 tests 2 2 2 0 0 00:09:13.427 asserts 5670 5670 5670 0 n/a 00:09:13.427 00:09:13.427 Elapsed time = 7.799 seconds 00:09:13.427 EAL: Calling mem event callback 'spdk:(nil)' 00:09:13.427 EAL: request: mp_malloc_sync 00:09:13.427 EAL: No shared files mode enabled, IPC is disabled 00:09:13.427 EAL: Heap on socket 0 was shrunk by 2MB 00:09:13.427 EAL: No shared files mode enabled, IPC is disabled 00:09:13.427 EAL: No shared files mode enabled, IPC is disabled 00:09:13.427 EAL: No shared files mode enabled, IPC is disabled 00:09:13.427 00:09:13.427 real 0m8.159s 00:09:13.427 user 0m6.753s 00:09:13.427 sys 0m1.229s 00:09:13.428 10:15:07 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.428 10:15:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:13.428 ************************************ 00:09:13.428 END TEST env_vtophys 00:09:13.428 ************************************ 00:09:13.428 10:15:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:13.428 10:15:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.428 10:15:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.428 10:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:09:13.428 ************************************ 00:09:13.428 START TEST env_pci 00:09:13.428 ************************************ 00:09:13.428 10:15:07 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:13.428 00:09:13.428 00:09:13.428 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.428 http://cunit.sourceforge.net/ 00:09:13.428 00:09:13.428 00:09:13.428 Suite: pci 00:09:13.428 Test: pci_hook ...[2024-11-25 10:15:07.482151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57754 has claimed it 00:09:13.428 passed 00:09:13.428 00:09:13.428 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.428 suites 1 1 n/a 0 0 00:09:13.428 tests 1 1 1 0 0 00:09:13.428 asserts 25 25 25 0 n/a 00:09:13.428 00:09:13.428 Elapsed time = 0.007 secondsEAL: Cannot find device (10000:00:01.0) 00:09:13.428 EAL: Failed to attach device on primary process 00:09:13.428 00:09:13.428 00:09:13.428 real 0m0.079s 00:09:13.428 user 0m0.043s 00:09:13.428 sys 0m0.034s 00:09:13.428 10:15:07 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.428 10:15:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:13.428 ************************************ 00:09:13.428 END TEST env_pci 00:09:13.428 ************************************ 00:09:13.428 10:15:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:13.428 10:15:07 env -- env/env.sh@15 -- # uname 00:09:13.428 10:15:07 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:13.428 10:15:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:13.428 10:15:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:13.428 10:15:07 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:13.428 10:15:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.428 10:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:09:13.428 ************************************ 00:09:13.428 START TEST env_dpdk_post_init 00:09:13.428 ************************************ 00:09:13.428 10:15:07 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:13.428 EAL: Detected CPU lcores: 10 00:09:13.428 EAL: Detected NUMA nodes: 1 00:09:13.428 EAL: Detected shared linkage of DPDK 00:09:13.428 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:13.428 EAL: Selected IOVA mode 'PA' 00:09:13.686 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:13.686 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:13.686 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:13.686 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:09:13.686 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:09:13.686 Starting DPDK initialization... 00:09:13.686 Starting SPDK post initialization... 00:09:13.686 SPDK NVMe probe 00:09:13.686 Attaching to 0000:00:10.0 00:09:13.686 Attaching to 0000:00:11.0 00:09:13.686 Attaching to 0000:00:12.0 00:09:13.686 Attaching to 0000:00:13.0 00:09:13.686 Attached to 0000:00:10.0 00:09:13.686 Attached to 0000:00:11.0 00:09:13.686 Attached to 0000:00:13.0 00:09:13.686 Attached to 0000:00:12.0 00:09:13.686 Cleaning up... 
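All four controllers attach, though not in probe order (0000:00:13.0 completes before 0000:00:12.0), suggesting attachment finishes asynchronously rather than in enumeration order. The probe only succeeds because setup.sh bound the devices to uio_pci_generic earlier; a quick way to confirm the binding that env_dpdk_post_init relies on:

    # show which kernel driver each NVMe BDF is currently bound to
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
    done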
00:09:13.686 00:09:13.686 real 0m0.317s 00:09:13.686 user 0m0.103s 00:09:13.686 sys 0m0.117s 00:09:13.686 10:15:07 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.686 10:15:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:13.686 ************************************ 00:09:13.686 END TEST env_dpdk_post_init 00:09:13.686 ************************************ 00:09:13.686 10:15:07 env -- env/env.sh@26 -- # uname 00:09:13.686 10:15:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:13.686 10:15:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:13.686 10:15:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.686 10:15:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.686 10:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:09:13.686 ************************************ 00:09:13.686 START TEST env_mem_callbacks 00:09:13.686 ************************************ 00:09:13.686 10:15:07 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:13.686 EAL: Detected CPU lcores: 10 00:09:13.686 EAL: Detected NUMA nodes: 1 00:09:13.686 EAL: Detected shared linkage of DPDK 00:09:13.945 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:13.945 EAL: Selected IOVA mode 'PA' 00:09:13.945 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:13.945 00:09:13.945 00:09:13.945 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.945 http://cunit.sourceforge.net/ 00:09:13.945 00:09:13.945 00:09:13.945 Suite: memory 00:09:13.945 Test: test ... 00:09:13.945 register 0x200000200000 2097152 00:09:13.945 malloc 3145728 00:09:13.945 register 0x200000400000 4194304 00:09:13.945 buf 0x2000004fffc0 len 3145728 PASSED 00:09:13.945 malloc 64 00:09:13.945 buf 0x2000004ffec0 len 64 PASSED 00:09:13.945 malloc 4194304 00:09:13.945 register 0x200000800000 6291456 00:09:13.945 buf 0x2000009fffc0 len 4194304 PASSED 00:09:13.945 free 0x2000004fffc0 3145728 00:09:13.945 free 0x2000004ffec0 64 00:09:13.945 unregister 0x200000400000 4194304 PASSED 00:09:13.945 free 0x2000009fffc0 4194304 00:09:13.945 unregister 0x200000800000 6291456 PASSED 00:09:13.945 malloc 8388608 00:09:13.945 register 0x200000400000 10485760 00:09:13.945 buf 0x2000005fffc0 len 8388608 PASSED 00:09:13.945 free 0x2000005fffc0 8388608 00:09:13.945 unregister 0x200000400000 10485760 PASSED 00:09:13.945 passed 00:09:13.945 00:09:13.945 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.945 suites 1 1 n/a 0 0 00:09:13.945 tests 1 1 1 0 0 00:09:13.945 asserts 15 15 15 0 n/a 00:09:13.945 00:09:13.945 Elapsed time = 0.052 seconds 00:09:13.945 00:09:13.945 real 0m0.259s 00:09:13.945 user 0m0.084s 00:09:13.945 sys 0m0.073s 00:09:13.945 10:15:08 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.945 10:15:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:13.945 ************************************ 00:09:13.945 END TEST env_mem_callbacks 00:09:13.945 ************************************ 00:09:13.945 00:09:13.945 real 0m9.682s 00:09:13.945 user 0m7.523s 00:09:13.945 sys 0m1.754s 00:09:13.945 10:15:08 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.945 10:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:09:13.945 ************************************ 00:09:13.945 END TEST env 00:09:13.945 
************************************ 00:09:14.204 10:15:08 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:14.204 10:15:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.204 10:15:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.204 10:15:08 -- common/autotest_common.sh@10 -- # set +x 00:09:14.204 ************************************ 00:09:14.204 START TEST rpc 00:09:14.204 ************************************ 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:14.204 * Looking for test storage... 00:09:14.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.204 10:15:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.204 10:15:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.204 10:15:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.204 10:15:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.204 10:15:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.204 10:15:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:14.204 10:15:08 rpc -- scripts/common.sh@345 -- # : 1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.204 10:15:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.204 10:15:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@353 -- # local d=1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.204 10:15:08 rpc -- scripts/common.sh@355 -- # echo 1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.204 10:15:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@353 -- # local d=2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.204 10:15:08 rpc -- scripts/common.sh@355 -- # echo 2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.204 10:15:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.204 10:15:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.204 10:15:08 rpc -- scripts/common.sh@368 -- # return 0 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.204 --rc genhtml_branch_coverage=1 00:09:14.204 --rc genhtml_function_coverage=1 00:09:14.204 --rc genhtml_legend=1 00:09:14.204 --rc geninfo_all_blocks=1 00:09:14.204 --rc geninfo_unexecuted_blocks=1 00:09:14.204 00:09:14.204 ' 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.204 --rc genhtml_branch_coverage=1 00:09:14.204 --rc genhtml_function_coverage=1 00:09:14.204 --rc genhtml_legend=1 00:09:14.204 --rc geninfo_all_blocks=1 00:09:14.204 --rc geninfo_unexecuted_blocks=1 00:09:14.204 00:09:14.204 ' 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:14.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.204 --rc genhtml_branch_coverage=1 00:09:14.204 --rc genhtml_function_coverage=1 00:09:14.204 --rc genhtml_legend=1 00:09:14.204 --rc geninfo_all_blocks=1 00:09:14.204 --rc geninfo_unexecuted_blocks=1 00:09:14.204 00:09:14.204 ' 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.204 --rc genhtml_branch_coverage=1 00:09:14.204 --rc genhtml_function_coverage=1 00:09:14.204 --rc genhtml_legend=1 00:09:14.204 --rc geninfo_all_blocks=1 00:09:14.204 --rc geninfo_unexecuted_blocks=1 00:09:14.204 00:09:14.204 ' 00:09:14.204 10:15:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57881 00:09:14.204 10:15:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.204 10:15:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:14.204 10:15:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57881 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@835 -- # '[' -z 57881 ']' 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
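At this point the harness has started the target (build/bin/spdk_tgt -e bdev, pid 57881) and is blocking in waitforlisten until the RPC server answers on /var/tmp/spdk.sock. A condensed, hypothetical form of that wait loop, assuming the stock scripts/rpc.py client:

    build/bin/spdk_tgt -e bdev & pid=$!
    # poll the RPC socket; bail out if the target dies before it starts listening
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done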
00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.204 10:15:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.462 [2024-11-25 10:15:08.634302] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:09:14.462 [2024-11-25 10:15:08.634562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57881 ] 00:09:14.721 [2024-11-25 10:15:08.819057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.721 [2024-11-25 10:15:08.941042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:14.721 [2024-11-25 10:15:08.941163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57881' to capture a snapshot of events at runtime. 00:09:14.721 [2024-11-25 10:15:08.941186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.721 [2024-11-25 10:15:08.941205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.721 [2024-11-25 10:15:08.941220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57881 for offline analysis/debug. 00:09:14.721 [2024-11-25 10:15:08.942540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.657 10:15:09 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.657 10:15:09 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.657 10:15:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:15.657 10:15:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:15.657 10:15:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:15.657 10:15:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:15.657 10:15:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.657 10:15:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.657 10:15:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 ************************************ 00:09:15.657 START TEST rpc_integrity 00:09:15.657 ************************************ 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.657 10:15:09 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:15.657 { 00:09:15.657 "name": "Malloc0", 00:09:15.657 "aliases": [ 00:09:15.657 "24a21158-6f89-4863-9b03-5f4c73fdd7c5" 00:09:15.657 ], 00:09:15.657 "product_name": "Malloc disk", 00:09:15.657 "block_size": 512, 00:09:15.657 "num_blocks": 16384, 00:09:15.657 "uuid": "24a21158-6f89-4863-9b03-5f4c73fdd7c5", 00:09:15.657 "assigned_rate_limits": { 00:09:15.657 "rw_ios_per_sec": 0, 00:09:15.657 "rw_mbytes_per_sec": 0, 00:09:15.657 "r_mbytes_per_sec": 0, 00:09:15.657 "w_mbytes_per_sec": 0 00:09:15.657 }, 00:09:15.657 "claimed": false, 00:09:15.657 "zoned": false, 00:09:15.657 "supported_io_types": { 00:09:15.657 "read": true, 00:09:15.657 "write": true, 00:09:15.657 "unmap": true, 00:09:15.657 "flush": true, 00:09:15.657 "reset": true, 00:09:15.657 "nvme_admin": false, 00:09:15.657 "nvme_io": false, 00:09:15.657 "nvme_io_md": false, 00:09:15.657 "write_zeroes": true, 00:09:15.657 "zcopy": true, 00:09:15.657 "get_zone_info": false, 00:09:15.657 "zone_management": false, 00:09:15.657 "zone_append": false, 00:09:15.657 "compare": false, 00:09:15.657 "compare_and_write": false, 00:09:15.657 "abort": true, 00:09:15.657 "seek_hole": false, 00:09:15.657 "seek_data": false, 00:09:15.657 "copy": true, 00:09:15.657 "nvme_iov_md": false 00:09:15.657 }, 00:09:15.657 "memory_domains": [ 00:09:15.657 { 00:09:15.657 "dma_device_id": "system", 00:09:15.657 "dma_device_type": 1 00:09:15.657 }, 00:09:15.657 { 00:09:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.657 "dma_device_type": 2 00:09:15.657 } 00:09:15.657 ], 00:09:15.657 "driver_specific": {} 00:09:15.657 } 00:09:15.657 ]' 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:15.657 10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 [2024-11-25 10:15:09.979358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:15.657 [2024-11-25 10:15:09.979450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.657 [2024-11-25 10:15:09.979493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.657 [2024-11-25 10:15:09.979513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.657 [2024-11-25 10:15:09.982998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.657 [2024-11-25 10:15:09.983075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:15.657 Passthru0 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.657 
10:15:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.657 10:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.916 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.916 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:15.916 { 00:09:15.916 "name": "Malloc0", 00:09:15.916 "aliases": [ 00:09:15.916 "24a21158-6f89-4863-9b03-5f4c73fdd7c5" 00:09:15.916 ], 00:09:15.916 "product_name": "Malloc disk", 00:09:15.916 "block_size": 512, 00:09:15.916 "num_blocks": 16384, 00:09:15.916 "uuid": "24a21158-6f89-4863-9b03-5f4c73fdd7c5", 00:09:15.916 "assigned_rate_limits": { 00:09:15.916 "rw_ios_per_sec": 0, 00:09:15.916 "rw_mbytes_per_sec": 0, 00:09:15.916 "r_mbytes_per_sec": 0, 00:09:15.916 "w_mbytes_per_sec": 0 00:09:15.916 }, 00:09:15.916 "claimed": true, 00:09:15.916 "claim_type": "exclusive_write", 00:09:15.916 "zoned": false, 00:09:15.916 "supported_io_types": { 00:09:15.916 "read": true, 00:09:15.916 "write": true, 00:09:15.916 "unmap": true, 00:09:15.916 "flush": true, 00:09:15.916 "reset": true, 00:09:15.916 "nvme_admin": false, 00:09:15.916 "nvme_io": false, 00:09:15.916 "nvme_io_md": false, 00:09:15.916 "write_zeroes": true, 00:09:15.916 "zcopy": true, 00:09:15.916 "get_zone_info": false, 00:09:15.916 "zone_management": false, 00:09:15.916 "zone_append": false, 00:09:15.916 "compare": false, 00:09:15.916 "compare_and_write": false, 00:09:15.916 "abort": true, 00:09:15.916 "seek_hole": false, 00:09:15.916 "seek_data": false, 00:09:15.916 "copy": true, 00:09:15.916 "nvme_iov_md": false 00:09:15.916 }, 00:09:15.916 "memory_domains": [ 00:09:15.916 { 00:09:15.916 "dma_device_id": "system", 00:09:15.916 "dma_device_type": 1 00:09:15.916 }, 00:09:15.916 { 00:09:15.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.916 "dma_device_type": 2 00:09:15.916 } 00:09:15.916 ], 00:09:15.916 "driver_specific": {} 00:09:15.916 }, 00:09:15.916 { 00:09:15.916 "name": "Passthru0", 00:09:15.916 "aliases": [ 00:09:15.916 "d4af75be-381c-56c8-a30f-7f24cd983bf5" 00:09:15.916 ], 00:09:15.916 "product_name": "passthru", 00:09:15.916 "block_size": 512, 00:09:15.916 "num_blocks": 16384, 00:09:15.916 "uuid": "d4af75be-381c-56c8-a30f-7f24cd983bf5", 00:09:15.916 "assigned_rate_limits": { 00:09:15.916 "rw_ios_per_sec": 0, 00:09:15.916 "rw_mbytes_per_sec": 0, 00:09:15.916 "r_mbytes_per_sec": 0, 00:09:15.916 "w_mbytes_per_sec": 0 00:09:15.916 }, 00:09:15.916 "claimed": false, 00:09:15.916 "zoned": false, 00:09:15.916 "supported_io_types": { 00:09:15.916 "read": true, 00:09:15.916 "write": true, 00:09:15.916 "unmap": true, 00:09:15.916 "flush": true, 00:09:15.916 "reset": true, 00:09:15.916 "nvme_admin": false, 00:09:15.916 "nvme_io": false, 00:09:15.916 "nvme_io_md": false, 00:09:15.916 "write_zeroes": true, 00:09:15.916 "zcopy": true, 00:09:15.916 "get_zone_info": false, 00:09:15.916 "zone_management": false, 00:09:15.916 "zone_append": false, 00:09:15.916 "compare": false, 00:09:15.916 "compare_and_write": false, 00:09:15.916 "abort": true, 00:09:15.916 "seek_hole": false, 00:09:15.916 "seek_data": false, 00:09:15.916 "copy": true, 00:09:15.916 "nvme_iov_md": false 00:09:15.916 }, 00:09:15.916 "memory_domains": [ 00:09:15.916 { 00:09:15.916 "dma_device_id": "system", 00:09:15.916 "dma_device_type": 1 00:09:15.916 }, 00:09:15.916 { 00:09:15.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.916 "dma_device_type": 2 
00:09:15.916 } 00:09:15.916 ], 00:09:15.916 "driver_specific": { 00:09:15.916 "passthru": { 00:09:15.916 "name": "Passthru0", 00:09:15.916 "base_bdev_name": "Malloc0" 00:09:15.916 } 00:09:15.916 } 00:09:15.917 } 00:09:15.917 ]' 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:15.917 10:15:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:15.917 00:09:15.917 real 0m0.353s 00:09:15.917 user 0m0.218s 00:09:15.917 sys 0m0.038s 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.917 10:15:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 ************************************ 00:09:15.917 END TEST rpc_integrity 00:09:15.917 ************************************ 00:09:15.917 10:15:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:15.917 10:15:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.917 10:15:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.917 10:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 ************************************ 00:09:15.917 START TEST rpc_plugins 00:09:15.917 ************************************ 00:09:15.917 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:15.917 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:15.917 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:15.917 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:15.917 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:16.176 { 00:09:16.176 "name": "Malloc1", 00:09:16.176 "aliases": 
[ 00:09:16.176 "29fed6ed-be1e-4466-829e-8d2767289a24" 00:09:16.176 ], 00:09:16.176 "product_name": "Malloc disk", 00:09:16.176 "block_size": 4096, 00:09:16.176 "num_blocks": 256, 00:09:16.176 "uuid": "29fed6ed-be1e-4466-829e-8d2767289a24", 00:09:16.176 "assigned_rate_limits": { 00:09:16.176 "rw_ios_per_sec": 0, 00:09:16.176 "rw_mbytes_per_sec": 0, 00:09:16.176 "r_mbytes_per_sec": 0, 00:09:16.176 "w_mbytes_per_sec": 0 00:09:16.176 }, 00:09:16.176 "claimed": false, 00:09:16.176 "zoned": false, 00:09:16.176 "supported_io_types": { 00:09:16.176 "read": true, 00:09:16.176 "write": true, 00:09:16.176 "unmap": true, 00:09:16.176 "flush": true, 00:09:16.176 "reset": true, 00:09:16.176 "nvme_admin": false, 00:09:16.176 "nvme_io": false, 00:09:16.176 "nvme_io_md": false, 00:09:16.176 "write_zeroes": true, 00:09:16.176 "zcopy": true, 00:09:16.176 "get_zone_info": false, 00:09:16.176 "zone_management": false, 00:09:16.176 "zone_append": false, 00:09:16.176 "compare": false, 00:09:16.176 "compare_and_write": false, 00:09:16.176 "abort": true, 00:09:16.176 "seek_hole": false, 00:09:16.176 "seek_data": false, 00:09:16.176 "copy": true, 00:09:16.176 "nvme_iov_md": false 00:09:16.176 }, 00:09:16.176 "memory_domains": [ 00:09:16.176 { 00:09:16.176 "dma_device_id": "system", 00:09:16.176 "dma_device_type": 1 00:09:16.176 }, 00:09:16.176 { 00:09:16.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.176 "dma_device_type": 2 00:09:16.176 } 00:09:16.176 ], 00:09:16.176 "driver_specific": {} 00:09:16.176 } 00:09:16.176 ]' 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:16.176 10:15:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:16.176 00:09:16.176 real 0m0.174s 00:09:16.176 user 0m0.113s 00:09:16.176 sys 0m0.017s 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.176 10:15:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:16.176 ************************************ 00:09:16.176 END TEST rpc_plugins 00:09:16.176 ************************************ 00:09:16.176 10:15:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:16.176 10:15:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.176 10:15:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.176 10:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.176 ************************************ 00:09:16.176 START TEST rpc_trace_cmd_test 00:09:16.176 ************************************ 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:16.176 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57881", 00:09:16.176 "tpoint_group_mask": "0x8", 00:09:16.176 "iscsi_conn": { 00:09:16.176 "mask": "0x2", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "scsi": { 00:09:16.176 "mask": "0x4", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "bdev": { 00:09:16.176 "mask": "0x8", 00:09:16.176 "tpoint_mask": "0xffffffffffffffff" 00:09:16.176 }, 00:09:16.176 "nvmf_rdma": { 00:09:16.176 "mask": "0x10", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "nvmf_tcp": { 00:09:16.176 "mask": "0x20", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "ftl": { 00:09:16.176 "mask": "0x40", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "blobfs": { 00:09:16.176 "mask": "0x80", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "dsa": { 00:09:16.176 "mask": "0x200", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "thread": { 00:09:16.176 "mask": "0x400", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "nvme_pcie": { 00:09:16.176 "mask": "0x800", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "iaa": { 00:09:16.176 "mask": "0x1000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "nvme_tcp": { 00:09:16.176 "mask": "0x2000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "bdev_nvme": { 00:09:16.176 "mask": "0x4000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "sock": { 00:09:16.176 "mask": "0x8000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "blob": { 00:09:16.176 "mask": "0x10000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "bdev_raid": { 00:09:16.176 "mask": "0x20000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 }, 00:09:16.176 "scheduler": { 00:09:16.176 "mask": "0x40000", 00:09:16.176 "tpoint_mask": "0x0" 00:09:16.176 } 00:09:16.176 }' 00:09:16.176 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:16.435 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:16.436 00:09:16.436 real 0m0.259s 00:09:16.436 user 0m0.221s 00:09:16.436 sys 0m0.027s 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:16.436 10:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.436 ************************************ 00:09:16.436 END TEST rpc_trace_cmd_test 00:09:16.436 ************************************ 00:09:16.436 10:15:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:16.436 10:15:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:16.436 10:15:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:16.436 10:15:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.436 10:15:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.436 10:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.695 ************************************ 00:09:16.695 START TEST rpc_daemon_integrity 00:09:16.695 ************************************ 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:16.696 { 00:09:16.696 "name": "Malloc2", 00:09:16.696 "aliases": [ 00:09:16.696 "42ceaadd-36e1-40ab-8bc9-7a23527f5b1f" 00:09:16.696 ], 00:09:16.696 "product_name": "Malloc disk", 00:09:16.696 "block_size": 512, 00:09:16.696 "num_blocks": 16384, 00:09:16.696 "uuid": "42ceaadd-36e1-40ab-8bc9-7a23527f5b1f", 00:09:16.696 "assigned_rate_limits": { 00:09:16.696 "rw_ios_per_sec": 0, 00:09:16.696 "rw_mbytes_per_sec": 0, 00:09:16.696 "r_mbytes_per_sec": 0, 00:09:16.696 "w_mbytes_per_sec": 0 00:09:16.696 }, 00:09:16.696 "claimed": false, 00:09:16.696 "zoned": false, 00:09:16.696 "supported_io_types": { 00:09:16.696 "read": true, 00:09:16.696 "write": true, 00:09:16.696 "unmap": true, 00:09:16.696 "flush": true, 00:09:16.696 "reset": true, 00:09:16.696 "nvme_admin": false, 00:09:16.696 "nvme_io": false, 00:09:16.696 "nvme_io_md": false, 00:09:16.696 "write_zeroes": true, 00:09:16.696 "zcopy": true, 00:09:16.696 "get_zone_info": false, 00:09:16.696 "zone_management": false, 00:09:16.696 "zone_append": false, 00:09:16.696 "compare": false, 00:09:16.696 
"compare_and_write": false, 00:09:16.696 "abort": true, 00:09:16.696 "seek_hole": false, 00:09:16.696 "seek_data": false, 00:09:16.696 "copy": true, 00:09:16.696 "nvme_iov_md": false 00:09:16.696 }, 00:09:16.696 "memory_domains": [ 00:09:16.696 { 00:09:16.696 "dma_device_id": "system", 00:09:16.696 "dma_device_type": 1 00:09:16.696 }, 00:09:16.696 { 00:09:16.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.696 "dma_device_type": 2 00:09:16.696 } 00:09:16.696 ], 00:09:16.696 "driver_specific": {} 00:09:16.696 } 00:09:16.696 ]' 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 [2024-11-25 10:15:10.937224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:16.696 [2024-11-25 10:15:10.937306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.696 [2024-11-25 10:15:10.937338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:16.696 [2024-11-25 10:15:10.937378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.696 [2024-11-25 10:15:10.940573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.696 [2024-11-25 10:15:10.940621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:16.696 Passthru0 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:16.696 { 00:09:16.696 "name": "Malloc2", 00:09:16.696 "aliases": [ 00:09:16.696 "42ceaadd-36e1-40ab-8bc9-7a23527f5b1f" 00:09:16.696 ], 00:09:16.696 "product_name": "Malloc disk", 00:09:16.696 "block_size": 512, 00:09:16.696 "num_blocks": 16384, 00:09:16.696 "uuid": "42ceaadd-36e1-40ab-8bc9-7a23527f5b1f", 00:09:16.696 "assigned_rate_limits": { 00:09:16.696 "rw_ios_per_sec": 0, 00:09:16.696 "rw_mbytes_per_sec": 0, 00:09:16.696 "r_mbytes_per_sec": 0, 00:09:16.696 "w_mbytes_per_sec": 0 00:09:16.696 }, 00:09:16.696 "claimed": true, 00:09:16.696 "claim_type": "exclusive_write", 00:09:16.696 "zoned": false, 00:09:16.696 "supported_io_types": { 00:09:16.696 "read": true, 00:09:16.696 "write": true, 00:09:16.696 "unmap": true, 00:09:16.696 "flush": true, 00:09:16.696 "reset": true, 00:09:16.696 "nvme_admin": false, 00:09:16.696 "nvme_io": false, 00:09:16.696 "nvme_io_md": false, 00:09:16.696 "write_zeroes": true, 00:09:16.696 "zcopy": true, 00:09:16.696 "get_zone_info": false, 00:09:16.696 "zone_management": false, 00:09:16.696 "zone_append": false, 00:09:16.696 "compare": false, 00:09:16.696 "compare_and_write": false, 00:09:16.696 "abort": true, 00:09:16.696 "seek_hole": false, 00:09:16.696 "seek_data": false, 
00:09:16.696 "copy": true, 00:09:16.696 "nvme_iov_md": false 00:09:16.696 }, 00:09:16.696 "memory_domains": [ 00:09:16.696 { 00:09:16.696 "dma_device_id": "system", 00:09:16.696 "dma_device_type": 1 00:09:16.696 }, 00:09:16.696 { 00:09:16.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.696 "dma_device_type": 2 00:09:16.696 } 00:09:16.696 ], 00:09:16.696 "driver_specific": {} 00:09:16.696 }, 00:09:16.696 { 00:09:16.696 "name": "Passthru0", 00:09:16.696 "aliases": [ 00:09:16.696 "4d6efabe-c5aa-5c61-9421-398775475140" 00:09:16.696 ], 00:09:16.696 "product_name": "passthru", 00:09:16.696 "block_size": 512, 00:09:16.696 "num_blocks": 16384, 00:09:16.696 "uuid": "4d6efabe-c5aa-5c61-9421-398775475140", 00:09:16.696 "assigned_rate_limits": { 00:09:16.696 "rw_ios_per_sec": 0, 00:09:16.696 "rw_mbytes_per_sec": 0, 00:09:16.696 "r_mbytes_per_sec": 0, 00:09:16.696 "w_mbytes_per_sec": 0 00:09:16.696 }, 00:09:16.696 "claimed": false, 00:09:16.696 "zoned": false, 00:09:16.696 "supported_io_types": { 00:09:16.696 "read": true, 00:09:16.696 "write": true, 00:09:16.696 "unmap": true, 00:09:16.696 "flush": true, 00:09:16.696 "reset": true, 00:09:16.696 "nvme_admin": false, 00:09:16.696 "nvme_io": false, 00:09:16.696 "nvme_io_md": false, 00:09:16.696 "write_zeroes": true, 00:09:16.696 "zcopy": true, 00:09:16.696 "get_zone_info": false, 00:09:16.696 "zone_management": false, 00:09:16.696 "zone_append": false, 00:09:16.696 "compare": false, 00:09:16.696 "compare_and_write": false, 00:09:16.696 "abort": true, 00:09:16.696 "seek_hole": false, 00:09:16.696 "seek_data": false, 00:09:16.696 "copy": true, 00:09:16.696 "nvme_iov_md": false 00:09:16.696 }, 00:09:16.696 "memory_domains": [ 00:09:16.696 { 00:09:16.696 "dma_device_id": "system", 00:09:16.696 "dma_device_type": 1 00:09:16.696 }, 00:09:16.696 { 00:09:16.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.696 "dma_device_type": 2 00:09:16.696 } 00:09:16.696 ], 00:09:16.696 "driver_specific": { 00:09:16.696 "passthru": { 00:09:16.696 "name": "Passthru0", 00:09:16.696 "base_bdev_name": "Malloc2" 00:09:16.696 } 00:09:16.696 } 00:09:16.696 } 00:09:16.696 ]' 00:09:16.696 10:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:16.696 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:16.696 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:16.696 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:16.958 00:09:16.958 real 0m0.356s 00:09:16.958 user 0m0.212s 00:09:16.958 sys 0m0.046s 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.958 10:15:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:16.958 ************************************ 00:09:16.958 END TEST rpc_daemon_integrity 00:09:16.958 ************************************ 00:09:16.958 10:15:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:16.958 10:15:11 rpc -- rpc/rpc.sh@84 -- # killprocess 57881 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 57881 ']' 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@958 -- # kill -0 57881 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@959 -- # uname 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57881 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.958 killing process with pid 57881 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57881' 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@973 -- # kill 57881 00:09:16.958 10:15:11 rpc -- common/autotest_common.sh@978 -- # wait 57881 00:09:19.486 00:09:19.486 real 0m4.943s 00:09:19.486 user 0m5.509s 00:09:19.486 sys 0m0.961s 00:09:19.486 10:15:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.486 10:15:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.486 ************************************ 00:09:19.486 END TEST rpc 00:09:19.486 ************************************ 00:09:19.486 10:15:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:19.486 10:15:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.486 10:15:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.486 10:15:13 -- common/autotest_common.sh@10 -- # set +x 00:09:19.486 ************************************ 00:09:19.486 START TEST skip_rpc 00:09:19.486 ************************************ 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:19.486 * Looking for test storage... 
00:09:19.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.486 10:15:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.486 --rc genhtml_branch_coverage=1 00:09:19.486 --rc genhtml_function_coverage=1 00:09:19.486 --rc genhtml_legend=1 00:09:19.486 --rc geninfo_all_blocks=1 00:09:19.486 --rc geninfo_unexecuted_blocks=1 00:09:19.486 00:09:19.486 ' 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.486 --rc genhtml_branch_coverage=1 00:09:19.486 --rc genhtml_function_coverage=1 00:09:19.486 --rc genhtml_legend=1 00:09:19.486 --rc geninfo_all_blocks=1 00:09:19.486 --rc geninfo_unexecuted_blocks=1 00:09:19.486 00:09:19.486 ' 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:19.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.486 --rc genhtml_branch_coverage=1 00:09:19.486 --rc genhtml_function_coverage=1 00:09:19.486 --rc genhtml_legend=1 00:09:19.486 --rc geninfo_all_blocks=1 00:09:19.486 --rc geninfo_unexecuted_blocks=1 00:09:19.486 00:09:19.486 ' 00:09:19.486 10:15:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.486 --rc genhtml_branch_coverage=1 00:09:19.486 --rc genhtml_function_coverage=1 00:09:19.486 --rc genhtml_legend=1 00:09:19.486 --rc geninfo_all_blocks=1 00:09:19.486 --rc geninfo_unexecuted_blocks=1 00:09:19.486 00:09:19.486 ' 00:09:19.486 10:15:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:19.486 10:15:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:19.487 10:15:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:19.487 10:15:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.487 10:15:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.487 10:15:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.487 ************************************ 00:09:19.487 START TEST skip_rpc 00:09:19.487 ************************************ 00:09:19.487 10:15:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:19.487 10:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58105 00:09:19.487 10:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.487 10:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:19.487 10:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:19.487 [2024-11-25 10:15:13.672696] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
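Condensed sketch of what TEST skip_rpc, now starting, actually asserts (paths as in the log; the NOT/es bookkeeping in the trace reduces to "the RPC call must fail"): with --no-rpc-server the socket never comes up, so the failure of spdk_get_version is the passing condition. The sleep mirrors the test's own settle delay at rpc/skip_rpc.sh@19.

    build/bin/spdk_tgt --no-rpc-server -m 0x1 & pid=$!
    sleep 5                                            # same settle delay as the test
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server answered" >&2; kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid" 2>/dev/null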
00:09:19.487 [2024-11-25 10:15:13.672906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58105 ] 00:09:19.744 [2024-11-25 10:15:13.855254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.744 [2024-11-25 10:15:13.968698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58105 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58105 ']' 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58105 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.006 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58105 00:09:25.007 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.007 killing process with pid 58105 00:09:25.007 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.007 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58105' 00:09:25.007 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58105 00:09:25.007 10:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58105 00:09:26.903 00:09:26.903 real 0m7.322s 00:09:26.903 user 0m6.680s 00:09:26.903 sys 0m0.543s 00:09:26.903 10:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.903 10:15:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.903 ************************************ 00:09:26.903 END TEST skip_rpc 00:09:26.903 
************************************ 00:09:26.904 10:15:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:26.904 10:15:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.904 10:15:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.904 10:15:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.904 ************************************ 00:09:26.904 START TEST skip_rpc_with_json 00:09:26.904 ************************************ 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58213 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58213 00:09:26.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58213 ']' 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.904 10:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:26.904 [2024-11-25 10:15:21.067356] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
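The TEST skip_rpc_with_json run that follows boils down to a save/replay round trip; a sketch using only RPCs visible below, with config and log paths as set at rpc/skip_rpc.sh@11-12 above (exactly how the test captures the second target's output is abbreviated here): snapshot the live target's state with save_config, restart a target headless from that snapshot, and prove the nvmf TCP transport came back by grepping the new target's log.

    scripts/rpc.py nvmf_create_transport -t tcp                     # state worth snapshotting
    scripts/rpc.py save_config > test/rpc/config.json               # the JSON dumped below
    # stop the first target, then replay the snapshot with no RPC server:
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
        > test/rpc/log.txt 2>&1 & sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt                   # proves the transport was re-created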
00:09:26.904 [2024-11-25 10:15:21.067564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58213 ] 00:09:27.162 [2024-11-25 10:15:21.255024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.162 [2024-11-25 10:15:21.394452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 [2024-11-25 10:15:22.269600] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:28.096 request: 00:09:28.096 { 00:09:28.096 "trtype": "tcp", 00:09:28.096 "method": "nvmf_get_transports", 00:09:28.096 "req_id": 1 00:09:28.096 } 00:09:28.096 Got JSON-RPC error response 00:09:28.096 response: 00:09:28.096 { 00:09:28.096 "code": -19, 00:09:28.096 "message": "No such device" 00:09:28.096 } 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 [2024-11-25 10:15:22.281735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.096 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:28.354 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.354 10:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:28.354 { 00:09:28.354 "subsystems": [ 00:09:28.354 { 00:09:28.354 "subsystem": "fsdev", 00:09:28.354 "config": [ 00:09:28.354 { 00:09:28.354 "method": "fsdev_set_opts", 00:09:28.354 "params": { 00:09:28.354 "fsdev_io_pool_size": 65535, 00:09:28.354 "fsdev_io_cache_size": 256 00:09:28.354 } 00:09:28.354 } 00:09:28.354 ] 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "subsystem": "keyring", 00:09:28.354 "config": [] 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "subsystem": "iobuf", 00:09:28.354 "config": [ 00:09:28.354 { 00:09:28.354 "method": "iobuf_set_options", 00:09:28.354 "params": { 00:09:28.354 "small_pool_count": 8192, 00:09:28.354 "large_pool_count": 1024, 00:09:28.354 "small_bufsize": 8192, 00:09:28.354 "large_bufsize": 135168, 00:09:28.354 "enable_numa": false 00:09:28.354 } 00:09:28.354 } 00:09:28.354 ] 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "subsystem": "sock", 00:09:28.354 "config": [ 00:09:28.354 { 
00:09:28.354 "method": "sock_set_default_impl", 00:09:28.354 "params": { 00:09:28.354 "impl_name": "posix" 00:09:28.354 } 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "method": "sock_impl_set_options", 00:09:28.354 "params": { 00:09:28.354 "impl_name": "ssl", 00:09:28.354 "recv_buf_size": 4096, 00:09:28.354 "send_buf_size": 4096, 00:09:28.354 "enable_recv_pipe": true, 00:09:28.354 "enable_quickack": false, 00:09:28.354 "enable_placement_id": 0, 00:09:28.354 "enable_zerocopy_send_server": true, 00:09:28.354 "enable_zerocopy_send_client": false, 00:09:28.354 "zerocopy_threshold": 0, 00:09:28.354 "tls_version": 0, 00:09:28.354 "enable_ktls": false 00:09:28.354 } 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "method": "sock_impl_set_options", 00:09:28.354 "params": { 00:09:28.354 "impl_name": "posix", 00:09:28.354 "recv_buf_size": 2097152, 00:09:28.354 "send_buf_size": 2097152, 00:09:28.354 "enable_recv_pipe": true, 00:09:28.354 "enable_quickack": false, 00:09:28.354 "enable_placement_id": 0, 00:09:28.354 "enable_zerocopy_send_server": true, 00:09:28.354 "enable_zerocopy_send_client": false, 00:09:28.354 "zerocopy_threshold": 0, 00:09:28.354 "tls_version": 0, 00:09:28.354 "enable_ktls": false 00:09:28.354 } 00:09:28.354 } 00:09:28.354 ] 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "subsystem": "vmd", 00:09:28.354 "config": [] 00:09:28.354 }, 00:09:28.354 { 00:09:28.354 "subsystem": "accel", 00:09:28.354 "config": [ 00:09:28.354 { 00:09:28.354 "method": "accel_set_options", 00:09:28.354 "params": { 00:09:28.355 "small_cache_size": 128, 00:09:28.355 "large_cache_size": 16, 00:09:28.355 "task_count": 2048, 00:09:28.355 "sequence_count": 2048, 00:09:28.355 "buf_count": 2048 00:09:28.355 } 00:09:28.355 } 00:09:28.355 ] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "bdev", 00:09:28.355 "config": [ 00:09:28.355 { 00:09:28.355 "method": "bdev_set_options", 00:09:28.355 "params": { 00:09:28.355 "bdev_io_pool_size": 65535, 00:09:28.355 "bdev_io_cache_size": 256, 00:09:28.355 "bdev_auto_examine": true, 00:09:28.355 "iobuf_small_cache_size": 128, 00:09:28.355 "iobuf_large_cache_size": 16 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "bdev_raid_set_options", 00:09:28.355 "params": { 00:09:28.355 "process_window_size_kb": 1024, 00:09:28.355 "process_max_bandwidth_mb_sec": 0 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "bdev_iscsi_set_options", 00:09:28.355 "params": { 00:09:28.355 "timeout_sec": 30 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "bdev_nvme_set_options", 00:09:28.355 "params": { 00:09:28.355 "action_on_timeout": "none", 00:09:28.355 "timeout_us": 0, 00:09:28.355 "timeout_admin_us": 0, 00:09:28.355 "keep_alive_timeout_ms": 10000, 00:09:28.355 "arbitration_burst": 0, 00:09:28.355 "low_priority_weight": 0, 00:09:28.355 "medium_priority_weight": 0, 00:09:28.355 "high_priority_weight": 0, 00:09:28.355 "nvme_adminq_poll_period_us": 10000, 00:09:28.355 "nvme_ioq_poll_period_us": 0, 00:09:28.355 "io_queue_requests": 0, 00:09:28.355 "delay_cmd_submit": true, 00:09:28.355 "transport_retry_count": 4, 00:09:28.355 "bdev_retry_count": 3, 00:09:28.355 "transport_ack_timeout": 0, 00:09:28.355 "ctrlr_loss_timeout_sec": 0, 00:09:28.355 "reconnect_delay_sec": 0, 00:09:28.355 "fast_io_fail_timeout_sec": 0, 00:09:28.355 "disable_auto_failback": false, 00:09:28.355 "generate_uuids": false, 00:09:28.355 "transport_tos": 0, 00:09:28.355 "nvme_error_stat": false, 00:09:28.355 "rdma_srq_size": 0, 00:09:28.355 "io_path_stat": false, 
00:09:28.355 "allow_accel_sequence": false, 00:09:28.355 "rdma_max_cq_size": 0, 00:09:28.355 "rdma_cm_event_timeout_ms": 0, 00:09:28.355 "dhchap_digests": [ 00:09:28.355 "sha256", 00:09:28.355 "sha384", 00:09:28.355 "sha512" 00:09:28.355 ], 00:09:28.355 "dhchap_dhgroups": [ 00:09:28.355 "null", 00:09:28.355 "ffdhe2048", 00:09:28.355 "ffdhe3072", 00:09:28.355 "ffdhe4096", 00:09:28.355 "ffdhe6144", 00:09:28.355 "ffdhe8192" 00:09:28.355 ] 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "bdev_nvme_set_hotplug", 00:09:28.355 "params": { 00:09:28.355 "period_us": 100000, 00:09:28.355 "enable": false 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "bdev_wait_for_examine" 00:09:28.355 } 00:09:28.355 ] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "scsi", 00:09:28.355 "config": null 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "scheduler", 00:09:28.355 "config": [ 00:09:28.355 { 00:09:28.355 "method": "framework_set_scheduler", 00:09:28.355 "params": { 00:09:28.355 "name": "static" 00:09:28.355 } 00:09:28.355 } 00:09:28.355 ] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "vhost_scsi", 00:09:28.355 "config": [] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "vhost_blk", 00:09:28.355 "config": [] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "ublk", 00:09:28.355 "config": [] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "nbd", 00:09:28.355 "config": [] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "nvmf", 00:09:28.355 "config": [ 00:09:28.355 { 00:09:28.355 "method": "nvmf_set_config", 00:09:28.355 "params": { 00:09:28.355 "discovery_filter": "match_any", 00:09:28.355 "admin_cmd_passthru": { 00:09:28.355 "identify_ctrlr": false 00:09:28.355 }, 00:09:28.355 "dhchap_digests": [ 00:09:28.355 "sha256", 00:09:28.355 "sha384", 00:09:28.355 "sha512" 00:09:28.355 ], 00:09:28.355 "dhchap_dhgroups": [ 00:09:28.355 "null", 00:09:28.355 "ffdhe2048", 00:09:28.355 "ffdhe3072", 00:09:28.355 "ffdhe4096", 00:09:28.355 "ffdhe6144", 00:09:28.355 "ffdhe8192" 00:09:28.355 ] 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "nvmf_set_max_subsystems", 00:09:28.355 "params": { 00:09:28.355 "max_subsystems": 1024 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "nvmf_set_crdt", 00:09:28.355 "params": { 00:09:28.355 "crdt1": 0, 00:09:28.355 "crdt2": 0, 00:09:28.355 "crdt3": 0 00:09:28.355 } 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "method": "nvmf_create_transport", 00:09:28.355 "params": { 00:09:28.355 "trtype": "TCP", 00:09:28.355 "max_queue_depth": 128, 00:09:28.355 "max_io_qpairs_per_ctrlr": 127, 00:09:28.355 "in_capsule_data_size": 4096, 00:09:28.355 "max_io_size": 131072, 00:09:28.355 "io_unit_size": 131072, 00:09:28.355 "max_aq_depth": 128, 00:09:28.355 "num_shared_buffers": 511, 00:09:28.355 "buf_cache_size": 4294967295, 00:09:28.355 "dif_insert_or_strip": false, 00:09:28.355 "zcopy": false, 00:09:28.355 "c2h_success": true, 00:09:28.355 "sock_priority": 0, 00:09:28.355 "abort_timeout_sec": 1, 00:09:28.355 "ack_timeout": 0, 00:09:28.355 "data_wr_pool_size": 0 00:09:28.355 } 00:09:28.355 } 00:09:28.355 ] 00:09:28.355 }, 00:09:28.355 { 00:09:28.355 "subsystem": "iscsi", 00:09:28.355 "config": [ 00:09:28.355 { 00:09:28.355 "method": "iscsi_set_options", 00:09:28.355 "params": { 00:09:28.355 "node_base": "iqn.2016-06.io.spdk", 00:09:28.355 "max_sessions": 128, 00:09:28.355 "max_connections_per_session": 2, 00:09:28.355 "max_queue_depth": 64, 00:09:28.355 
"default_time2wait": 2, 00:09:28.355 "default_time2retain": 20, 00:09:28.355 "first_burst_length": 8192, 00:09:28.355 "immediate_data": true, 00:09:28.355 "allow_duplicated_isid": false, 00:09:28.355 "error_recovery_level": 0, 00:09:28.355 "nop_timeout": 60, 00:09:28.355 "nop_in_interval": 30, 00:09:28.355 "disable_chap": false, 00:09:28.355 "require_chap": false, 00:09:28.355 "mutual_chap": false, 00:09:28.355 "chap_group": 0, 00:09:28.355 "max_large_datain_per_connection": 64, 00:09:28.355 "max_r2t_per_connection": 4, 00:09:28.355 "pdu_pool_size": 36864, 00:09:28.355 "immediate_data_pool_size": 16384, 00:09:28.355 "data_out_pool_size": 2048 00:09:28.355 } 00:09:28.355 } 00:09:28.355 ] 00:09:28.355 } 00:09:28.355 ] 00:09:28.355 } 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58213 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58213 ']' 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58213 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58213 00:09:28.355 killing process with pid 58213 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58213' 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58213 00:09:28.355 10:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58213 00:09:30.932 10:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58265 00:09:30.932 10:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:30.932 10:15:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58265 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58265 ']' 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58265 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58265 00:09:36.191 killing process with pid 58265 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.191 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58265' 00:09:36.192 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58265 00:09:36.192 10:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58265 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:38.087 ************************************ 00:09:38.087 END TEST skip_rpc_with_json 00:09:38.087 ************************************ 00:09:38.087 00:09:38.087 real 0m11.379s 00:09:38.087 user 0m10.537s 00:09:38.087 sys 0m1.250s 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:38.087 10:15:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:38.087 10:15:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.087 10:15:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.087 10:15:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.087 ************************************ 00:09:38.087 START TEST skip_rpc_with_delay 00:09:38.087 ************************************ 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:38.087 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:38.345 [2024-11-25 10:15:32.505822] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
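The ERROR just above is the point of TEST skip_rpc_with_delay, not a fault: --wait-for-rpc asks the app to pause startup until an RPC arrives, while --no-rpc-server removes the only channel that could deliver one, so app.c rejects the combination. Reduced to a sketch, the test passes exactly when this launch fails:

    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: contradictory flags were accepted" >&2
        exit 1
    fi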
00:09:38.345 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:38.345 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.345 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:38.345 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.345 00:09:38.345 real 0m0.219s 00:09:38.345 user 0m0.111s 00:09:38.345 sys 0m0.105s 00:09:38.345 ************************************ 00:09:38.345 END TEST skip_rpc_with_delay 00:09:38.345 ************************************ 00:09:38.345 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.345 10:15:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:38.345 10:15:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:38.345 10:15:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:38.345 10:15:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:38.345 10:15:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.346 10:15:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.346 10:15:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.346 ************************************ 00:09:38.346 START TEST exit_on_failed_rpc_init 00:09:38.346 ************************************ 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:38.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58404 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58404 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58404 ']' 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.346 10:15:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:38.604 [2024-11-25 10:15:32.768098] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:09:38.604 [2024-11-25 10:15:32.768555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58404 ] 00:09:38.862 [2024-11-25 10:15:32.944026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.862 [2024-11-25 10:15:33.079539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:39.796 10:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:39.796 [2024-11-25 10:15:34.105101] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:09:39.796 [2024-11-25 10:15:34.105303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58422 ] 00:09:40.054 [2024-11-25 10:15:34.304713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.311 [2024-11-25 10:15:34.496633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.311 [2024-11-25 10:15:34.496804] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
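Likewise, the "socket path ... in use" ERROR above is the expected outcome of TEST exit_on_failed_rpc_init: two targets contend for the default /var/tmp/spdk.sock and the second must exit non-zero. A by-hand sketch of the collision (giving the second instance its own socket via -r/--rpc-socket would be the actual fix, which is precisely what this test does not do):

    build/bin/spdk_tgt -m 0x1 & first=$!
    sleep 5
    if build/bin/spdk_tgt -m 0x2; then                # same default RPC socket, so listen fails
        echo "unexpected: second instance came up" >&2; kill "$first"; exit 1
    fi
    kill "$first"; wait "$first" 2>/dev/null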
00:09:40.311 [2024-11-25 10:15:34.496834] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:40.311 [2024-11-25 10:15:34.496859] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58404 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58404 ']' 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58404 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58404 00:09:40.570 killing process with pid 58404 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58404' 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58404 00:09:40.570 10:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58404 00:09:43.131 00:09:43.131 real 0m4.444s 00:09:43.131 user 0m4.927s 00:09:43.131 sys 0m0.742s 00:09:43.131 10:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.131 ************************************ 00:09:43.131 END TEST exit_on_failed_rpc_init 00:09:43.131 ************************************ 00:09:43.131 10:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:43.131 10:15:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:43.131 00:09:43.131 real 0m23.810s 00:09:43.131 user 0m22.458s 00:09:43.131 sys 0m2.864s 00:09:43.131 10:15:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.131 ************************************ 00:09:43.131 END TEST skip_rpc 00:09:43.131 ************************************ 00:09:43.131 10:15:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.131 10:15:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:43.131 10:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.131 10:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.131 10:15:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.131 
************************************ 00:09:43.131 START TEST rpc_client 00:09:43.131 ************************************ 00:09:43.131 10:15:37 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:43.131 * Looking for test storage... 00:09:43.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:43.131 10:15:37 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.131 10:15:37 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.131 10:15:37 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.131 10:15:37 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.131 10:15:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.132 10:15:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.132 --rc genhtml_branch_coverage=1 00:09:43.132 --rc genhtml_function_coverage=1 00:09:43.132 --rc genhtml_legend=1 00:09:43.132 --rc geninfo_all_blocks=1 00:09:43.132 --rc geninfo_unexecuted_blocks=1 00:09:43.132 00:09:43.132 ' 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.132 --rc genhtml_branch_coverage=1 00:09:43.132 --rc genhtml_function_coverage=1 00:09:43.132 --rc genhtml_legend=1 00:09:43.132 --rc geninfo_all_blocks=1 00:09:43.132 --rc geninfo_unexecuted_blocks=1 00:09:43.132 00:09:43.132 ' 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.132 --rc genhtml_branch_coverage=1 00:09:43.132 --rc genhtml_function_coverage=1 00:09:43.132 --rc genhtml_legend=1 00:09:43.132 --rc geninfo_all_blocks=1 00:09:43.132 --rc geninfo_unexecuted_blocks=1 00:09:43.132 00:09:43.132 ' 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.132 --rc genhtml_branch_coverage=1 00:09:43.132 --rc genhtml_function_coverage=1 00:09:43.132 --rc genhtml_legend=1 00:09:43.132 --rc geninfo_all_blocks=1 00:09:43.132 --rc geninfo_unexecuted_blocks=1 00:09:43.132 00:09:43.132 ' 00:09:43.132 10:15:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:43.132 OK 00:09:43.132 10:15:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:43.132 00:09:43.132 real 0m0.252s 00:09:43.132 user 0m0.143s 00:09:43.132 sys 0m0.116s 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.132 10:15:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:43.132 ************************************ 00:09:43.132 END TEST rpc_client 00:09:43.132 ************************************ 00:09:43.388 10:15:37 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:43.388 10:15:37 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.388 10:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.388 10:15:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.388 ************************************ 00:09:43.388 START TEST json_config 00:09:43.388 ************************************ 00:09:43.388 10:15:37 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:43.388 10:15:37 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.388 10:15:37 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.388 10:15:37 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.388 10:15:37 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.388 10:15:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.389 10:15:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.389 10:15:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.389 10:15:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.389 10:15:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.389 10:15:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.389 10:15:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:43.389 10:15:37 json_config -- scripts/common.sh@345 -- # : 1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.389 10:15:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.389 10:15:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@353 -- # local d=1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.389 10:15:37 json_config -- scripts/common.sh@355 -- # echo 1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.389 10:15:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@353 -- # local d=2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.389 10:15:37 json_config -- scripts/common.sh@355 -- # echo 2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.389 10:15:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.389 10:15:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.389 10:15:37 json_config -- scripts/common.sh@368 -- # return 0 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.389 --rc genhtml_branch_coverage=1 00:09:43.389 --rc genhtml_function_coverage=1 00:09:43.389 --rc genhtml_legend=1 00:09:43.389 --rc geninfo_all_blocks=1 00:09:43.389 --rc geninfo_unexecuted_blocks=1 00:09:43.389 00:09:43.389 ' 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.389 --rc genhtml_branch_coverage=1 00:09:43.389 --rc genhtml_function_coverage=1 00:09:43.389 --rc genhtml_legend=1 00:09:43.389 --rc geninfo_all_blocks=1 00:09:43.389 --rc geninfo_unexecuted_blocks=1 00:09:43.389 00:09:43.389 ' 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.389 --rc genhtml_branch_coverage=1 00:09:43.389 --rc genhtml_function_coverage=1 00:09:43.389 --rc genhtml_legend=1 00:09:43.389 --rc geninfo_all_blocks=1 00:09:43.389 --rc geninfo_unexecuted_blocks=1 00:09:43.389 00:09:43.389 ' 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.389 --rc genhtml_branch_coverage=1 00:09:43.389 --rc genhtml_function_coverage=1 00:09:43.389 --rc genhtml_legend=1 00:09:43.389 --rc geninfo_all_blocks=1 00:09:43.389 --rc geninfo_unexecuted_blocks=1 00:09:43.389 00:09:43.389 ' 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.389 10:15:37 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28cd232b-d928-4e5c-ad06-351eb2523405 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=28cd232b-d928-4e5c-ad06-351eb2523405 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.389 10:15:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.389 10:15:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.389 10:15:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.389 10:15:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.389 10:15:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.389 10:15:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.389 10:15:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.389 10:15:37 json_config -- paths/export.sh@5 -- # export PATH 00:09:43.389 10:15:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@51 -- # : 0 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.389 10:15:37 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.389 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.389 10:15:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.389 WARNING: No tests are enabled so not running JSON configuration tests 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:43.389 10:15:37 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:43.389 00:09:43.389 real 0m0.187s 00:09:43.389 user 0m0.114s 00:09:43.389 sys 0m0.072s 00:09:43.389 ************************************ 00:09:43.389 END TEST json_config 00:09:43.389 ************************************ 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.389 10:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:43.389 10:15:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:43.389 10:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.389 10:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.389 10:15:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.646 ************************************ 00:09:43.646 START TEST json_config_extra_key 00:09:43.646 ************************************ 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.646 10:15:37 
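Note the "[: : integer expression expected" complaint above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', a numeric test against an empty variable. The pattern and the usual guard look like this (flag is a stand-in name, since the actual variable at that line is not visible in the trace):

    flag=''                    # empty, exactly as traced: '[' '' -eq 1 ']'
    [ "$flag" -eq 1 ]          # -> [: : integer expression expected (status 2, acts as false)
    [ "${flag:-0}" -eq 1 ]     # guard: empty defaults to 0, no error printed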
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.646 10:15:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.646 --rc genhtml_branch_coverage=1 00:09:43.646 --rc genhtml_function_coverage=1 00:09:43.646 --rc genhtml_legend=1 00:09:43.646 --rc geninfo_all_blocks=1 00:09:43.646 --rc geninfo_unexecuted_blocks=1 00:09:43.646 00:09:43.646 ' 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.646 --rc genhtml_branch_coverage=1 00:09:43.646 --rc genhtml_function_coverage=1 00:09:43.646 --rc genhtml_legend=1 00:09:43.646 --rc geninfo_all_blocks=1 00:09:43.646 --rc geninfo_unexecuted_blocks=1 00:09:43.646 00:09:43.646 ' 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.646 --rc genhtml_branch_coverage=1 00:09:43.646 --rc genhtml_function_coverage=1 00:09:43.646 --rc genhtml_legend=1 00:09:43.646 --rc geninfo_all_blocks=1 00:09:43.646 --rc geninfo_unexecuted_blocks=1 00:09:43.646 00:09:43.646 ' 00:09:43.646 10:15:37 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.646 --rc genhtml_branch_coverage=1 00:09:43.646 --rc 
genhtml_function_coverage=1 00:09:43.646 --rc genhtml_legend=1 00:09:43.646 --rc geninfo_all_blocks=1 00:09:43.646 --rc geninfo_unexecuted_blocks=1 00:09:43.646 00:09:43.646 ' 00:09:43.646 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.646 10:15:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:43.646 10:15:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28cd232b-d928-4e5c-ad06-351eb2523405 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=28cd232b-d928-4e5c-ad06-351eb2523405 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.647 10:15:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.647 10:15:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.647 10:15:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.647 10:15:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.647 10:15:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.647 10:15:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.647 10:15:37 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.647 10:15:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:43.647 10:15:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.647 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.647 10:15:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:43.647 INFO: launching applications... 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:09:43.647 10:15:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58632 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:43.647 Waiting for target to run... 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:43.647 10:15:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58632 /var/tmp/spdk_tgt.sock 00:09:43.647 10:15:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58632 ']' 00:09:43.647 10:15:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:43.647 10:15:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.647 10:15:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:43.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:43.647 10:15:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.647 10:15:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 [2024-11-25 10:15:38.060016] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:09:43.905 [2024-11-25 10:15:38.060451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58632 ] 00:09:44.470 [2024-11-25 10:15:38.532405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.470 [2024-11-25 10:15:38.684006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.404 00:09:45.404 INFO: shutting down applications... 00:09:45.404 10:15:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.404 10:15:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:45.404 10:15:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:09:45.404 10:15:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58632 ]] 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58632 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:45.404 10:15:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:45.662 10:15:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:45.662 10:15:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:45.662 10:15:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:45.662 10:15:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:46.228 10:15:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:46.228 10:15:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:46.228 10:15:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:46.228 10:15:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:46.794 10:15:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:46.794 10:15:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:46.794 10:15:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:46.794 10:15:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:47.359 10:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:47.359 10:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:47.359 10:15:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:47.359 10:15:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:47.617 10:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:47.617 10:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:47.617 10:15:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:47.617 10:15:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58632 00:09:48.182 SPDK target shutdown done 00:09:48.182 Success 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:48.182 10:15:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:48.182 10:15:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:48.182 00:09:48.182 real 0m4.731s 00:09:48.182 user 0m4.120s 00:09:48.182 sys 0m0.711s 00:09:48.182 
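The repeated kill -0 / sleep 0.5 cycle above is json_config/common.sh polling for the target to exit after SIGINT; as a simplified sketch of the traced helper:

    pid=58632                                  # spdk_tgt pid from this run
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do           # up to 30 * 0.5s = 15s
        if ! kill -0 "$pid" 2>/dev/null; then  # kill -0 probes without signaling
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done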
10:15:42 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.182 10:15:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:48.182 ************************************ 00:09:48.182 END TEST json_config_extra_key 00:09:48.182 ************************************ 00:09:48.182 10:15:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:48.182 10:15:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.182 10:15:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.182 10:15:42 -- common/autotest_common.sh@10 -- # set +x 00:09:48.443 ************************************ 00:09:48.443 START TEST alias_rpc 00:09:48.443 ************************************ 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:48.443 * Looking for test storage... 00:09:48.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:48.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.443 10:15:42 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.443 --rc genhtml_branch_coverage=1 00:09:48.443 --rc genhtml_function_coverage=1 00:09:48.443 --rc genhtml_legend=1 00:09:48.443 --rc geninfo_all_blocks=1 00:09:48.443 --rc geninfo_unexecuted_blocks=1 00:09:48.443 00:09:48.443 ' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.443 --rc genhtml_branch_coverage=1 00:09:48.443 --rc genhtml_function_coverage=1 00:09:48.443 --rc genhtml_legend=1 00:09:48.443 --rc geninfo_all_blocks=1 00:09:48.443 --rc geninfo_unexecuted_blocks=1 00:09:48.443 00:09:48.443 ' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.443 --rc genhtml_branch_coverage=1 00:09:48.443 --rc genhtml_function_coverage=1 00:09:48.443 --rc genhtml_legend=1 00:09:48.443 --rc geninfo_all_blocks=1 00:09:48.443 --rc geninfo_unexecuted_blocks=1 00:09:48.443 00:09:48.443 ' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.443 --rc genhtml_branch_coverage=1 00:09:48.443 --rc genhtml_function_coverage=1 00:09:48.443 --rc genhtml_legend=1 00:09:48.443 --rc geninfo_all_blocks=1 00:09:48.443 --rc geninfo_unexecuted_blocks=1 00:09:48.443 00:09:48.443 ' 00:09:48.443 10:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:48.443 10:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58738 00:09:48.443 10:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58738 00:09:48.443 10:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58738 ']' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.443 10:15:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.705 [2024-11-25 10:15:42.844441] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
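waitforlisten, seen above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, amounts to polling the target's RPC socket until it answers. A sketch under the assumption that an rpc_get_methods probe is an adequate liveness check (the real autotest_common.sh helper may probe differently):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                   rpc_get_methods &>/dev/null; then
                return 0                             # RPC is answering
            fi
            sleep 0.1
        done
        return 1
    }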
00:09:48.705 [2024-11-25 10:15:42.844873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58738 ] 00:09:48.964 [2024-11-25 10:15:43.040816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.964 [2024-11-25 10:15:43.200261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.898 10:15:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.898 10:15:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.898 10:15:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:50.157 10:15:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58738 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58738 ']' 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58738 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58738 00:09:50.157 killing process with pid 58738 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58738' 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@973 -- # kill 58738 00:09:50.157 10:15:44 alias_rpc -- common/autotest_common.sh@978 -- # wait 58738 00:09:52.684 ************************************ 00:09:52.684 END TEST alias_rpc 00:09:52.684 ************************************ 00:09:52.684 00:09:52.684 real 0m4.234s 00:09:52.684 user 0m4.368s 00:09:52.684 sys 0m0.686s 00:09:52.684 10:15:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.684 10:15:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.684 10:15:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:52.684 10:15:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:52.684 10:15:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.684 10:15:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.684 10:15:46 -- common/autotest_common.sh@10 -- # set +x 00:09:52.685 ************************************ 00:09:52.685 START TEST spdkcli_tcp 00:09:52.685 ************************************ 00:09:52.685 10:15:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:52.685 * Looking for test storage... 
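killprocess, as traced for pid 58738: verify the pid, make sure it is not a wrapping sudo, then signal and reap it. A sketch; the sudo branch of the real helper is elided here:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                         # nothing to do if it is gone
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1             # real helper handles sudo specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # works because spdk_tgt is a child of this shell
    }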
00:09:52.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:52.685 10:15:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.685 10:15:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.685 10:15:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.685 10:15:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.685 10:15:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.685 10:15:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.942 10:15:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:52.942 10:15:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.942 10:15:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.943 --rc genhtml_branch_coverage=1 00:09:52.943 --rc genhtml_function_coverage=1 00:09:52.943 --rc genhtml_legend=1 00:09:52.943 --rc geninfo_all_blocks=1 00:09:52.943 --rc geninfo_unexecuted_blocks=1 00:09:52.943 00:09:52.943 ' 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.943 --rc genhtml_branch_coverage=1 00:09:52.943 --rc genhtml_function_coverage=1 00:09:52.943 --rc genhtml_legend=1 00:09:52.943 --rc geninfo_all_blocks=1 00:09:52.943 --rc geninfo_unexecuted_blocks=1 00:09:52.943 
00:09:52.943 ' 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.943 --rc genhtml_branch_coverage=1 00:09:52.943 --rc genhtml_function_coverage=1 00:09:52.943 --rc genhtml_legend=1 00:09:52.943 --rc geninfo_all_blocks=1 00:09:52.943 --rc geninfo_unexecuted_blocks=1 00:09:52.943 00:09:52.943 ' 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.943 --rc genhtml_branch_coverage=1 00:09:52.943 --rc genhtml_function_coverage=1 00:09:52.943 --rc genhtml_legend=1 00:09:52.943 --rc geninfo_all_blocks=1 00:09:52.943 --rc geninfo_unexecuted_blocks=1 00:09:52.943 00:09:52.943 ' 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58856 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58856 00:09:52.943 10:15:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58856 ']' 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.943 10:15:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.943 [2024-11-25 10:15:47.178484] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:09:52.943 [2024-11-25 10:15:47.178985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58856 ] 00:09:53.201 [2024-11-25 10:15:47.376576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.458 [2024-11-25 10:15:47.549328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.458 [2024-11-25 10:15:47.549343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.407 10:15:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.407 10:15:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:54.407 10:15:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58873 00:09:54.407 10:15:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:54.407 10:15:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:54.665 [ 00:09:54.665 "bdev_malloc_delete", 00:09:54.665 "bdev_malloc_create", 00:09:54.665 "bdev_null_resize", 00:09:54.665 "bdev_null_delete", 00:09:54.665 "bdev_null_create", 00:09:54.665 "bdev_nvme_cuse_unregister", 00:09:54.665 "bdev_nvme_cuse_register", 00:09:54.665 "bdev_opal_new_user", 00:09:54.665 "bdev_opal_set_lock_state", 00:09:54.665 "bdev_opal_delete", 00:09:54.665 "bdev_opal_get_info", 00:09:54.665 "bdev_opal_create", 00:09:54.665 "bdev_nvme_opal_revert", 00:09:54.665 "bdev_nvme_opal_init", 00:09:54.665 "bdev_nvme_send_cmd", 00:09:54.665 "bdev_nvme_set_keys", 00:09:54.665 "bdev_nvme_get_path_iostat", 00:09:54.665 "bdev_nvme_get_mdns_discovery_info", 00:09:54.665 "bdev_nvme_stop_mdns_discovery", 00:09:54.665 "bdev_nvme_start_mdns_discovery", 00:09:54.665 "bdev_nvme_set_multipath_policy", 00:09:54.665 "bdev_nvme_set_preferred_path", 00:09:54.665 "bdev_nvme_get_io_paths", 00:09:54.665 "bdev_nvme_remove_error_injection", 00:09:54.665 "bdev_nvme_add_error_injection", 00:09:54.665 "bdev_nvme_get_discovery_info", 00:09:54.665 "bdev_nvme_stop_discovery", 00:09:54.665 "bdev_nvme_start_discovery", 00:09:54.665 "bdev_nvme_get_controller_health_info", 00:09:54.665 "bdev_nvme_disable_controller", 00:09:54.665 "bdev_nvme_enable_controller", 00:09:54.665 "bdev_nvme_reset_controller", 00:09:54.665 "bdev_nvme_get_transport_statistics", 00:09:54.665 "bdev_nvme_apply_firmware", 00:09:54.665 "bdev_nvme_detach_controller", 00:09:54.665 "bdev_nvme_get_controllers", 00:09:54.665 "bdev_nvme_attach_controller", 00:09:54.665 "bdev_nvme_set_hotplug", 00:09:54.665 "bdev_nvme_set_options", 00:09:54.665 "bdev_passthru_delete", 00:09:54.665 "bdev_passthru_create", 00:09:54.665 "bdev_lvol_set_parent_bdev", 00:09:54.665 "bdev_lvol_set_parent", 00:09:54.665 "bdev_lvol_check_shallow_copy", 00:09:54.665 "bdev_lvol_start_shallow_copy", 00:09:54.665 "bdev_lvol_grow_lvstore", 00:09:54.665 "bdev_lvol_get_lvols", 00:09:54.665 "bdev_lvol_get_lvstores", 00:09:54.665 "bdev_lvol_delete", 00:09:54.665 "bdev_lvol_set_read_only", 00:09:54.665 "bdev_lvol_resize", 00:09:54.665 "bdev_lvol_decouple_parent", 00:09:54.665 "bdev_lvol_inflate", 00:09:54.665 "bdev_lvol_rename", 00:09:54.665 "bdev_lvol_clone_bdev", 00:09:54.665 "bdev_lvol_clone", 00:09:54.665 "bdev_lvol_snapshot", 00:09:54.665 "bdev_lvol_create", 00:09:54.665 "bdev_lvol_delete_lvstore", 00:09:54.665 "bdev_lvol_rename_lvstore", 00:09:54.665 
"bdev_lvol_create_lvstore", 00:09:54.665 "bdev_raid_set_options", 00:09:54.665 "bdev_raid_remove_base_bdev", 00:09:54.665 "bdev_raid_add_base_bdev", 00:09:54.665 "bdev_raid_delete", 00:09:54.665 "bdev_raid_create", 00:09:54.665 "bdev_raid_get_bdevs", 00:09:54.665 "bdev_error_inject_error", 00:09:54.665 "bdev_error_delete", 00:09:54.665 "bdev_error_create", 00:09:54.665 "bdev_split_delete", 00:09:54.665 "bdev_split_create", 00:09:54.665 "bdev_delay_delete", 00:09:54.665 "bdev_delay_create", 00:09:54.666 "bdev_delay_update_latency", 00:09:54.666 "bdev_zone_block_delete", 00:09:54.666 "bdev_zone_block_create", 00:09:54.666 "blobfs_create", 00:09:54.666 "blobfs_detect", 00:09:54.666 "blobfs_set_cache_size", 00:09:54.666 "bdev_xnvme_delete", 00:09:54.666 "bdev_xnvme_create", 00:09:54.666 "bdev_aio_delete", 00:09:54.666 "bdev_aio_rescan", 00:09:54.666 "bdev_aio_create", 00:09:54.666 "bdev_ftl_set_property", 00:09:54.666 "bdev_ftl_get_properties", 00:09:54.666 "bdev_ftl_get_stats", 00:09:54.666 "bdev_ftl_unmap", 00:09:54.666 "bdev_ftl_unload", 00:09:54.666 "bdev_ftl_delete", 00:09:54.666 "bdev_ftl_load", 00:09:54.666 "bdev_ftl_create", 00:09:54.666 "bdev_virtio_attach_controller", 00:09:54.666 "bdev_virtio_scsi_get_devices", 00:09:54.666 "bdev_virtio_detach_controller", 00:09:54.666 "bdev_virtio_blk_set_hotplug", 00:09:54.666 "bdev_iscsi_delete", 00:09:54.666 "bdev_iscsi_create", 00:09:54.666 "bdev_iscsi_set_options", 00:09:54.666 "accel_error_inject_error", 00:09:54.666 "ioat_scan_accel_module", 00:09:54.666 "dsa_scan_accel_module", 00:09:54.666 "iaa_scan_accel_module", 00:09:54.666 "keyring_file_remove_key", 00:09:54.666 "keyring_file_add_key", 00:09:54.666 "keyring_linux_set_options", 00:09:54.666 "fsdev_aio_delete", 00:09:54.666 "fsdev_aio_create", 00:09:54.666 "iscsi_get_histogram", 00:09:54.666 "iscsi_enable_histogram", 00:09:54.666 "iscsi_set_options", 00:09:54.666 "iscsi_get_auth_groups", 00:09:54.666 "iscsi_auth_group_remove_secret", 00:09:54.666 "iscsi_auth_group_add_secret", 00:09:54.666 "iscsi_delete_auth_group", 00:09:54.666 "iscsi_create_auth_group", 00:09:54.666 "iscsi_set_discovery_auth", 00:09:54.666 "iscsi_get_options", 00:09:54.666 "iscsi_target_node_request_logout", 00:09:54.666 "iscsi_target_node_set_redirect", 00:09:54.666 "iscsi_target_node_set_auth", 00:09:54.666 "iscsi_target_node_add_lun", 00:09:54.666 "iscsi_get_stats", 00:09:54.666 "iscsi_get_connections", 00:09:54.666 "iscsi_portal_group_set_auth", 00:09:54.666 "iscsi_start_portal_group", 00:09:54.666 "iscsi_delete_portal_group", 00:09:54.666 "iscsi_create_portal_group", 00:09:54.666 "iscsi_get_portal_groups", 00:09:54.666 "iscsi_delete_target_node", 00:09:54.666 "iscsi_target_node_remove_pg_ig_maps", 00:09:54.666 "iscsi_target_node_add_pg_ig_maps", 00:09:54.666 "iscsi_create_target_node", 00:09:54.666 "iscsi_get_target_nodes", 00:09:54.666 "iscsi_delete_initiator_group", 00:09:54.666 "iscsi_initiator_group_remove_initiators", 00:09:54.666 "iscsi_initiator_group_add_initiators", 00:09:54.666 "iscsi_create_initiator_group", 00:09:54.666 "iscsi_get_initiator_groups", 00:09:54.666 "nvmf_set_crdt", 00:09:54.666 "nvmf_set_config", 00:09:54.666 "nvmf_set_max_subsystems", 00:09:54.666 "nvmf_stop_mdns_prr", 00:09:54.666 "nvmf_publish_mdns_prr", 00:09:54.666 "nvmf_subsystem_get_listeners", 00:09:54.666 "nvmf_subsystem_get_qpairs", 00:09:54.666 "nvmf_subsystem_get_controllers", 00:09:54.666 "nvmf_get_stats", 00:09:54.666 "nvmf_get_transports", 00:09:54.666 "nvmf_create_transport", 00:09:54.666 "nvmf_get_targets", 00:09:54.666 
"nvmf_delete_target", 00:09:54.666 "nvmf_create_target", 00:09:54.666 "nvmf_subsystem_allow_any_host", 00:09:54.666 "nvmf_subsystem_set_keys", 00:09:54.666 "nvmf_subsystem_remove_host", 00:09:54.666 "nvmf_subsystem_add_host", 00:09:54.666 "nvmf_ns_remove_host", 00:09:54.666 "nvmf_ns_add_host", 00:09:54.666 "nvmf_subsystem_remove_ns", 00:09:54.666 "nvmf_subsystem_set_ns_ana_group", 00:09:54.666 "nvmf_subsystem_add_ns", 00:09:54.666 "nvmf_subsystem_listener_set_ana_state", 00:09:54.666 "nvmf_discovery_get_referrals", 00:09:54.666 "nvmf_discovery_remove_referral", 00:09:54.666 "nvmf_discovery_add_referral", 00:09:54.666 "nvmf_subsystem_remove_listener", 00:09:54.666 "nvmf_subsystem_add_listener", 00:09:54.666 "nvmf_delete_subsystem", 00:09:54.666 "nvmf_create_subsystem", 00:09:54.666 "nvmf_get_subsystems", 00:09:54.666 "env_dpdk_get_mem_stats", 00:09:54.666 "nbd_get_disks", 00:09:54.666 "nbd_stop_disk", 00:09:54.666 "nbd_start_disk", 00:09:54.666 "ublk_recover_disk", 00:09:54.666 "ublk_get_disks", 00:09:54.666 "ublk_stop_disk", 00:09:54.666 "ublk_start_disk", 00:09:54.666 "ublk_destroy_target", 00:09:54.666 "ublk_create_target", 00:09:54.666 "virtio_blk_create_transport", 00:09:54.666 "virtio_blk_get_transports", 00:09:54.666 "vhost_controller_set_coalescing", 00:09:54.666 "vhost_get_controllers", 00:09:54.666 "vhost_delete_controller", 00:09:54.666 "vhost_create_blk_controller", 00:09:54.666 "vhost_scsi_controller_remove_target", 00:09:54.666 "vhost_scsi_controller_add_target", 00:09:54.666 "vhost_start_scsi_controller", 00:09:54.666 "vhost_create_scsi_controller", 00:09:54.666 "thread_set_cpumask", 00:09:54.666 "scheduler_set_options", 00:09:54.666 "framework_get_governor", 00:09:54.666 "framework_get_scheduler", 00:09:54.666 "framework_set_scheduler", 00:09:54.666 "framework_get_reactors", 00:09:54.666 "thread_get_io_channels", 00:09:54.666 "thread_get_pollers", 00:09:54.666 "thread_get_stats", 00:09:54.666 "framework_monitor_context_switch", 00:09:54.666 "spdk_kill_instance", 00:09:54.666 "log_enable_timestamps", 00:09:54.666 "log_get_flags", 00:09:54.666 "log_clear_flag", 00:09:54.666 "log_set_flag", 00:09:54.666 "log_get_level", 00:09:54.666 "log_set_level", 00:09:54.666 "log_get_print_level", 00:09:54.666 "log_set_print_level", 00:09:54.666 "framework_enable_cpumask_locks", 00:09:54.666 "framework_disable_cpumask_locks", 00:09:54.666 "framework_wait_init", 00:09:54.666 "framework_start_init", 00:09:54.666 "scsi_get_devices", 00:09:54.666 "bdev_get_histogram", 00:09:54.666 "bdev_enable_histogram", 00:09:54.666 "bdev_set_qos_limit", 00:09:54.666 "bdev_set_qd_sampling_period", 00:09:54.666 "bdev_get_bdevs", 00:09:54.666 "bdev_reset_iostat", 00:09:54.666 "bdev_get_iostat", 00:09:54.666 "bdev_examine", 00:09:54.666 "bdev_wait_for_examine", 00:09:54.666 "bdev_set_options", 00:09:54.666 "accel_get_stats", 00:09:54.666 "accel_set_options", 00:09:54.666 "accel_set_driver", 00:09:54.666 "accel_crypto_key_destroy", 00:09:54.666 "accel_crypto_keys_get", 00:09:54.666 "accel_crypto_key_create", 00:09:54.666 "accel_assign_opc", 00:09:54.666 "accel_get_module_info", 00:09:54.666 "accel_get_opc_assignments", 00:09:54.666 "vmd_rescan", 00:09:54.666 "vmd_remove_device", 00:09:54.666 "vmd_enable", 00:09:54.666 "sock_get_default_impl", 00:09:54.666 "sock_set_default_impl", 00:09:54.666 "sock_impl_set_options", 00:09:54.666 "sock_impl_get_options", 00:09:54.666 "iobuf_get_stats", 00:09:54.666 "iobuf_set_options", 00:09:54.666 "keyring_get_keys", 00:09:54.666 "framework_get_pci_devices", 00:09:54.666 
"framework_get_config", 00:09:54.666 "framework_get_subsystems", 00:09:54.666 "fsdev_set_opts", 00:09:54.666 "fsdev_get_opts", 00:09:54.666 "trace_get_info", 00:09:54.666 "trace_get_tpoint_group_mask", 00:09:54.666 "trace_disable_tpoint_group", 00:09:54.666 "trace_enable_tpoint_group", 00:09:54.666 "trace_clear_tpoint_mask", 00:09:54.666 "trace_set_tpoint_mask", 00:09:54.666 "notify_get_notifications", 00:09:54.666 "notify_get_types", 00:09:54.666 "spdk_get_version", 00:09:54.666 "rpc_get_methods" 00:09:54.666 ] 00:09:54.666 10:15:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.666 10:15:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:54.666 10:15:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58856 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58856 ']' 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58856 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.666 10:15:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58856 00:09:54.925 killing process with pid 58856 00:09:54.925 10:15:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.925 10:15:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.925 10:15:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58856' 00:09:54.925 10:15:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58856 00:09:54.925 10:15:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58856 00:09:57.454 ************************************ 00:09:57.454 END TEST spdkcli_tcp 00:09:57.454 ************************************ 00:09:57.454 00:09:57.454 real 0m4.670s 00:09:57.454 user 0m8.375s 00:09:57.454 sys 0m0.838s 00:09:57.454 10:15:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.454 10:15:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.454 10:15:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:57.454 10:15:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.454 10:15:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.454 10:15:51 -- common/autotest_common.sh@10 -- # set +x 00:09:57.454 ************************************ 00:09:57.454 START TEST dpdk_mem_utility 00:09:57.454 ************************************ 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:57.454 * Looking for test storage... 
00:09:57.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:57.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.454 10:15:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.454 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.455 --rc genhtml_branch_coverage=1 00:09:57.455 --rc genhtml_function_coverage=1 00:09:57.455 --rc genhtml_legend=1 00:09:57.455 --rc geninfo_all_blocks=1 00:09:57.455 --rc geninfo_unexecuted_blocks=1 00:09:57.455 00:09:57.455 ' 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.455 --rc genhtml_branch_coverage=1 00:09:57.455 --rc genhtml_function_coverage=1 00:09:57.455 --rc genhtml_legend=1 00:09:57.455 --rc geninfo_all_blocks=1 00:09:57.455 --rc geninfo_unexecuted_blocks=1 00:09:57.455 00:09:57.455 ' 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.455 --rc genhtml_branch_coverage=1 00:09:57.455 --rc genhtml_function_coverage=1 00:09:57.455 --rc genhtml_legend=1 00:09:57.455 --rc geninfo_all_blocks=1 00:09:57.455 --rc geninfo_unexecuted_blocks=1 00:09:57.455 00:09:57.455 ' 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.455 --rc genhtml_branch_coverage=1 00:09:57.455 --rc genhtml_function_coverage=1 00:09:57.455 --rc genhtml_legend=1 00:09:57.455 --rc geninfo_all_blocks=1 00:09:57.455 --rc geninfo_unexecuted_blocks=1 00:09:57.455 00:09:57.455 ' 00:09:57.455 10:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:57.455 10:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58978 00:09:57.455 10:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58978 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58978 ']' 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.455 10:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.455 10:15:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:57.712 [2024-11-25 10:15:51.877087] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:09:57.712 [2024-11-25 10:15:51.877300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58978 ] 00:09:57.969 [2024-11-25 10:15:52.068480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.969 [2024-11-25 10:15:52.240511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.921 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.921 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:58.921 10:15:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:58.921 10:15:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:58.921 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.921 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:58.921 { 00:09:58.921 "filename": "/tmp/spdk_mem_dump.txt" 00:09:58.921 } 00:09:58.921 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.921 10:15:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:59.179 DPDK memory size 816.000000 MiB in 1 heap(s) 00:09:59.179 1 heaps totaling size 816.000000 MiB 00:09:59.179 size: 816.000000 MiB heap id: 0 00:09:59.179 end heaps---------- 00:09:59.179 9 mempools totaling size 595.772034 MiB 00:09:59.179 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:59.179 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:59.179 size: 92.545471 MiB name: bdev_io_58978 00:09:59.179 size: 50.003479 MiB name: msgpool_58978 00:09:59.179 size: 36.509338 MiB name: fsdev_io_58978 00:09:59.179 size: 21.763794 MiB name: PDU_Pool 00:09:59.179 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:59.179 size: 4.133484 MiB name: evtpool_58978 00:09:59.179 size: 0.026123 MiB name: Session_Pool 00:09:59.179 end mempools------- 00:09:59.179 6 memzones totaling size 4.142822 MiB 00:09:59.179 size: 1.000366 MiB name: RG_ring_0_58978 00:09:59.179 size: 1.000366 MiB name: RG_ring_1_58978 00:09:59.179 size: 1.000366 MiB name: RG_ring_4_58978 00:09:59.179 size: 1.000366 MiB name: RG_ring_5_58978 00:09:59.179 size: 0.125366 MiB name: RG_ring_2_58978 00:09:59.179 size: 0.015991 MiB name: RG_ring_3_58978 00:09:59.179 end memzones------- 00:09:59.179 10:15:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:59.179 heap id: 0 total size: 816.000000 MiB number of busy elements: 311 number of free elements: 18 00:09:59.179 list of free elements. 
size: 16.792358 MiB 00:09:59.179 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:59.179 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:59.179 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:59.179 element at address: 0x200018d00040 with size: 0.999939 MiB 00:09:59.179 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:59.179 element at address: 0x200019200000 with size: 0.999084 MiB 00:09:59.179 element at address: 0x200031e00000 with size: 0.994324 MiB 00:09:59.179 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:59.179 element at address: 0x200018a00000 with size: 0.959656 MiB 00:09:59.179 element at address: 0x200019500040 with size: 0.936401 MiB 00:09:59.179 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:59.179 element at address: 0x20001ac00000 with size: 0.562683 MiB 00:09:59.179 element at address: 0x200000c00000 with size: 0.490173 MiB 00:09:59.179 element at address: 0x200018e00000 with size: 0.487976 MiB 00:09:59.179 element at address: 0x200019600000 with size: 0.485413 MiB 00:09:59.179 element at address: 0x200012c00000 with size: 0.443481 MiB 00:09:59.179 element at address: 0x200028000000 with size: 0.390442 MiB 00:09:59.179 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:59.179 list of standard malloc elements. size: 199.286743 MiB 00:09:59.179 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:59.179 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:59.179 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:09:59.179 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:59.179 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:59.179 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:59.179 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:09:59.179 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:59.179 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:59.179 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:09:59.179 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:59.179 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:09:59.179 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:59.179 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:09:59.179 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:59.179 element at address: 0x200012bff580 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bff980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c72080 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012c72180 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:59.180 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:09:59.180 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac921c0 with size: 0.000244 MiB 
00:09:59.180 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:09:59.180 element at 
address: 0x20001ac953c0 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200028063f40 with size: 0.000244 MiB 00:09:59.180 element at address: 0x200028064040 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806af80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b080 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b180 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b280 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b380 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b480 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b580 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b780 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806b980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806be80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c080 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c180 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c280 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c380 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c480 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c580 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c780 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806c980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d080 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d180 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d280 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d380 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d480 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d580 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d780 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806d980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806da80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806db80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806dc80 
with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806de80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806df80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e080 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e180 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e280 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e380 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e480 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e580 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e780 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806e980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f080 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f180 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f280 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f380 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f480 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f580 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f680 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f780 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f880 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806f980 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:09:59.180 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:09:59.180 list of memzone associated elements. 
size: 599.920898 MiB 00:09:59.180 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:09:59.180 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:59.180 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:09:59.180 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:59.180 element at address: 0x200012df4740 with size: 92.045105 MiB 00:09:59.180 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58978_0 00:09:59.180 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:59.180 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58978_0 00:09:59.180 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:59.180 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58978_0 00:09:59.180 element at address: 0x2000197be900 with size: 20.255615 MiB 00:09:59.180 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:59.180 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:09:59.180 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:59.180 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:59.180 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58978_0 00:09:59.180 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:59.180 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58978 00:09:59.180 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:59.180 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58978 00:09:59.180 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:59.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:59.180 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:09:59.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:59.180 element at address: 0x200018afde00 with size: 1.008179 MiB 00:09:59.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:59.180 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:09:59.180 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:59.180 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:59.180 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58978 00:09:59.180 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:59.180 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58978 00:09:59.180 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:09:59.180 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58978 00:09:59.180 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:09:59.180 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58978 00:09:59.180 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:59.180 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58978 00:09:59.180 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:59.180 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58978 00:09:59.180 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:09:59.180 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:59.180 element at address: 0x200012c72280 with size: 0.500549 MiB 00:09:59.180 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:59.180 element at address: 0x20001967c440 with size: 0.250549 MiB 00:09:59.180 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:59.180 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:59.180 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58978 00:09:59.180 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:59.180 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58978 00:09:59.180 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:09:59.180 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:59.180 element at address: 0x200028064140 with size: 0.023804 MiB 00:09:59.180 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:59.180 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:59.180 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58978 00:09:59.180 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:09:59.180 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:59.180 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:59.180 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58978 00:09:59.180 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:59.180 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58978 00:09:59.180 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:59.180 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58978 00:09:59.180 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:09:59.180 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:59.180 10:15:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:59.180 10:15:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58978 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58978 ']' 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58978 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58978 00:09:59.180 killing process with pid 58978 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58978' 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58978 00:09:59.180 10:15:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58978 00:10:01.708 00:10:01.708 real 0m4.391s 00:10:01.708 user 0m4.317s 00:10:01.708 sys 0m0.721s 00:10:01.708 10:15:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.708 10:15:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:01.708 ************************************ 00:10:01.708 END TEST dpdk_mem_utility 00:10:01.708 ************************************ 00:10:01.708 10:15:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:01.708 10:15:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.708 10:15:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.708 10:15:55 -- common/autotest_common.sh@10 -- # set +x 
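The dpdk_mem_utility run that just ended reduces to a short manual workflow: start spdk_tgt, ask it over JSON-RPC to dump its DPDK memory state (the {"filename": "/tmp/spdk_mem_dump.txt"} reply above), then post-process the dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element listing of heap 0. A minimal sketch of that flow, assuming the spdk_repo layout used by this job; the retry loop stands in for the test's waitforlisten helper:

  #!/usr/bin/env bash
  # Sketch of the dpdk_mem_utility flow above, not the test script itself.
  SPDK=/home/vagrant/spdk_repo/spdk

  "$SPDK/build/bin/spdk_tgt" &                      # start the target app
  pid=$!
  for _ in $(seq 1 50); do                          # poll until the RPC socket answers
    "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.2
  done

  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
  "$SPDK/scripts/dpdk_mem_info.py"                  # heap/mempool/memzone summary
  "$SPDK/scripts/dpdk_mem_info.py" -m 0             # per-element detail for heap id 0

  kill "$pid" && wait "$pid"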
00:10:01.708 ************************************ 00:10:01.708 START TEST event 00:10:01.708 ************************************ 00:10:01.708 10:15:55 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:01.966 * Looking for test storage... 00:10:01.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:01.966 10:15:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.966 10:15:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.966 10:15:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.966 10:15:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.966 10:15:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.966 10:15:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.966 10:15:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.966 10:15:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.966 10:15:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.966 10:15:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.966 10:15:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.966 10:15:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.966 10:15:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.966 10:15:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.967 10:15:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.967 10:15:56 event -- scripts/common.sh@344 -- # case "$op" in 00:10:01.967 10:15:56 event -- scripts/common.sh@345 -- # : 1 00:10:01.967 10:15:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.967 10:15:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.967 10:15:56 event -- scripts/common.sh@365 -- # decimal 1 00:10:01.967 10:15:56 event -- scripts/common.sh@353 -- # local d=1 00:10:01.967 10:15:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.967 10:15:56 event -- scripts/common.sh@355 -- # echo 1 00:10:01.967 10:15:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.967 10:15:56 event -- scripts/common.sh@366 -- # decimal 2 00:10:01.967 10:15:56 event -- scripts/common.sh@353 -- # local d=2 00:10:01.967 10:15:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.967 10:15:56 event -- scripts/common.sh@355 -- # echo 2 00:10:01.967 10:15:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.967 10:15:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.967 10:15:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.967 10:15:56 event -- scripts/common.sh@368 -- # return 0 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.967 --rc genhtml_branch_coverage=1 00:10:01.967 --rc genhtml_function_coverage=1 00:10:01.967 --rc genhtml_legend=1 00:10:01.967 --rc geninfo_all_blocks=1 00:10:01.967 --rc geninfo_unexecuted_blocks=1 00:10:01.967 00:10:01.967 ' 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.967 --rc genhtml_branch_coverage=1 00:10:01.967 --rc genhtml_function_coverage=1 00:10:01.967 --rc genhtml_legend=1 00:10:01.967 --rc 
geninfo_all_blocks=1 00:10:01.967 --rc geninfo_unexecuted_blocks=1 00:10:01.967 00:10:01.967 ' 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.967 --rc genhtml_branch_coverage=1 00:10:01.967 --rc genhtml_function_coverage=1 00:10:01.967 --rc genhtml_legend=1 00:10:01.967 --rc geninfo_all_blocks=1 00:10:01.967 --rc geninfo_unexecuted_blocks=1 00:10:01.967 00:10:01.967 ' 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.967 --rc genhtml_branch_coverage=1 00:10:01.967 --rc genhtml_function_coverage=1 00:10:01.967 --rc genhtml_legend=1 00:10:01.967 --rc geninfo_all_blocks=1 00:10:01.967 --rc geninfo_unexecuted_blocks=1 00:10:01.967 00:10:01.967 ' 00:10:01.967 10:15:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:01.967 10:15:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:01.967 10:15:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:01.967 10:15:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.967 10:15:56 event -- common/autotest_common.sh@10 -- # set +x 00:10:01.967 ************************************ 00:10:01.967 START TEST event_perf 00:10:01.967 ************************************ 00:10:01.967 10:15:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:01.967 Running I/O for 1 seconds...[2024-11-25 10:15:56.240295] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:10:01.967 [2024-11-25 10:15:56.240596] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:10:02.224 [2024-11-25 10:15:56.432178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.482 [2024-11-25 10:15:56.659542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.482 [2024-11-25 10:15:56.659619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.482 [2024-11-25 10:15:56.659790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.482 Running I/O for 1 seconds...[2024-11-25 10:15:56.659806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.857 00:10:03.857 lcore 0: 140030 00:10:03.857 lcore 1: 140030 00:10:03.857 lcore 2: 140032 00:10:03.857 lcore 3: 140034 00:10:03.857 done. 
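The `lt 1.15 2` xtrace from scripts/common.sh that opens each of these tests is a plain component-wise version compare: split both version strings on `.`, `-` and `:`, walk up to the longer component count, and decide on the first pair that differs, which is how lcov 1.x gets routed to the legacy --rc option spelling above. A simplified standalone sketch of that logic; the real cmp_versions also supports the other comparison operators and normalizes non-numeric components through its decimal helper:

  # Simplified sketch of the cmp_versions '<' path traced above.
  version_lt() {              # version_lt 1.15 2 -> exit 0 iff $1 < $2
    local IFS=.-:             # same separators scripts/common.sh splits on
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
      (( x > y )) && return 1
      (( x < y )) && return 0
    done
    return 1                            # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov'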
00:10:03.857 00:10:03.857 real 0m1.760s 00:10:03.857 user 0m4.486s 00:10:03.857 sys 0m0.143s 00:10:03.857 10:15:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.857 10:15:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:03.857 ************************************ 00:10:03.857 END TEST event_perf 00:10:03.857 ************************************ 00:10:03.857 10:15:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:03.857 10:15:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:03.857 10:15:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.857 10:15:57 event -- common/autotest_common.sh@10 -- # set +x 00:10:03.857 ************************************ 00:10:03.857 START TEST event_reactor 00:10:03.857 ************************************ 00:10:03.857 10:15:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:03.857 [2024-11-25 10:15:58.051979] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:10:03.858 [2024-11-25 10:15:58.052347] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59131 ] 00:10:04.116 [2024-11-25 10:15:58.228230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.116 [2024-11-25 10:15:58.372586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.487 test_start 00:10:05.487 oneshot 00:10:05.487 tick 100 00:10:05.487 tick 100 00:10:05.487 tick 250 00:10:05.487 tick 100 00:10:05.487 tick 100 00:10:05.487 tick 250 00:10:05.487 tick 100 00:10:05.487 tick 500 00:10:05.487 tick 100 00:10:05.487 tick 100 00:10:05.487 tick 250 00:10:05.487 tick 100 00:10:05.487 tick 100 00:10:05.487 test_end 00:10:05.487 00:10:05.487 real 0m1.612s 00:10:05.487 user 0m1.394s 00:10:05.487 sys 0m0.106s 00:10:05.487 10:15:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.487 10:15:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:05.487 ************************************ 00:10:05.487 END TEST event_reactor 00:10:05.487 ************************************ 00:10:05.487 10:15:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:05.487 10:15:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.487 10:15:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.487 10:15:59 event -- common/autotest_common.sh@10 -- # set +x 00:10:05.487 ************************************ 00:10:05.487 START TEST event_reactor_perf 00:10:05.487 ************************************ 00:10:05.487 10:15:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:05.487 [2024-11-25 10:15:59.736038] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:10:05.487 [2024-11-25 10:15:59.736226] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:10:05.744 [2024-11-25 10:15:59.933627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.002 [2024-11-25 10:16:00.107969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.376 test_start 00:10:07.376 test_end 00:10:07.376 Performance: 273237 events per second 00:10:07.376 ************************************ 00:10:07.376 END TEST event_reactor_perf 00:10:07.376 ************************************ 00:10:07.376 00:10:07.376 real 0m1.660s 00:10:07.376 user 0m1.425s 00:10:07.376 sys 0m0.124s 00:10:07.376 10:16:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.376 10:16:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:07.376 10:16:01 event -- event/event.sh@49 -- # uname -s 00:10:07.376 10:16:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:07.376 10:16:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:07.376 10:16:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.376 10:16:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.376 10:16:01 event -- common/autotest_common.sh@10 -- # set +x 00:10:07.376 ************************************ 00:10:07.376 START TEST event_scheduler 00:10:07.376 ************************************ 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:07.376 * Looking for test storage... 
00:10:07.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.376 10:16:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.376 --rc genhtml_branch_coverage=1 00:10:07.376 --rc genhtml_function_coverage=1 00:10:07.376 --rc genhtml_legend=1 00:10:07.376 --rc geninfo_all_blocks=1 00:10:07.376 --rc geninfo_unexecuted_blocks=1 00:10:07.376 00:10:07.376 ' 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.376 --rc genhtml_branch_coverage=1 00:10:07.376 --rc genhtml_function_coverage=1 00:10:07.376 --rc genhtml_legend=1 00:10:07.376 --rc geninfo_all_blocks=1 00:10:07.376 --rc geninfo_unexecuted_blocks=1 00:10:07.376 00:10:07.376 ' 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.376 --rc genhtml_branch_coverage=1 00:10:07.376 --rc genhtml_function_coverage=1 00:10:07.376 --rc genhtml_legend=1 00:10:07.376 --rc geninfo_all_blocks=1 00:10:07.376 --rc geninfo_unexecuted_blocks=1 00:10:07.376 00:10:07.376 ' 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.376 --rc genhtml_branch_coverage=1 00:10:07.376 --rc genhtml_function_coverage=1 00:10:07.376 --rc genhtml_legend=1 00:10:07.376 --rc geninfo_all_blocks=1 00:10:07.376 --rc geninfo_unexecuted_blocks=1 00:10:07.376 00:10:07.376 ' 00:10:07.376 10:16:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:07.376 10:16:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59249 00:10:07.376 10:16:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:07.376 10:16:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59249 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59249 ']' 00:10:07.376 10:16:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.376 10:16:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:07.635 [2024-11-25 10:16:01.713040] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:10:07.635 [2024-11-25 10:16:01.713523] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59249 ] 00:10:07.635 [2024-11-25 10:16:01.909752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.893 [2024-11-25 10:16:02.086923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.893 [2024-11-25 10:16:02.087081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.893 [2024-11-25 10:16:02.087231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.893 [2024-11-25 10:16:02.087478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:08.460 10:16:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:08.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:08.460 POWER: Cannot set governor of lcore 0 to userspace 00:10:08.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:08.460 POWER: Cannot set governor of lcore 0 to performance 00:10:08.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:08.460 POWER: Cannot set governor of lcore 0 to userspace 00:10:08.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:08.460 POWER: Cannot set governor of lcore 0 to userspace 00:10:08.460 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:08.460 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:08.460 POWER: Unable to set Power Management Environment for lcore 0 00:10:08.460 [2024-11-25 10:16:02.721636] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:08.460 [2024-11-25 10:16:02.721669] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:08.460 [2024-11-25 10:16:02.721684] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:08.460 [2024-11-25 10:16:02.721707] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:08.460 [2024-11-25 10:16:02.721719] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:08.460 [2024-11-25 10:16:02.721733] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.460 10:16:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.460 10:16:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:08.718 [2024-11-25 10:16:03.031183] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
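The POWER and GUEST_CHANNEL errors above are expected on this VM: the dynamic scheduler first tries to take ownership of the CPU frequency governors, finds neither writable cpufreq sysfs nodes nor a virtio power channel, and falls back to running without the dpdk governor while keeping its tuning (load limit 20, core limit 80, core busy 95, per the notices). Outside this test plugin, selecting the same scheduler is a single RPC against an app started with --wait-for-rpc, as the scheduler app is here; a minimal sketch:

  # Sketch: switch a live SPDK app (started with --wait-for-rpc) to the dynamic scheduler.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" framework_set_scheduler dynamic   # must happen before subsystem init
  "$RPC" framework_start_init              # mirrors the framework_start_init step above
  "$RPC" framework_get_scheduler           # should now report the dynamic scheduler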
00:10:08.718 10:16:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.718 10:16:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:08.718 10:16:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.718 10:16:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.718 10:16:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:08.718 ************************************ 00:10:08.718 START TEST scheduler_create_thread 00:10:08.718 ************************************ 00:10:08.718 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:08.718 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:08.718 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.718 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 2 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 3 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 4 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 5 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 6 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 7 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 8 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 9 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 10 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 10:16:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:10.347 10:16:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.347 10:16:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:10.347 10:16:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:10.347 10:16:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.347 10:16:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:11.720 ************************************ 00:10:11.720 END TEST scheduler_create_thread 00:10:11.720 ************************************ 00:10:11.720 10:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.720 00:10:11.720 real 0m2.619s 00:10:11.720 user 0m0.019s 00:10:11.720 sys 0m0.004s 00:10:11.720 10:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.720 10:16:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:11.720 10:16:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:11.720 10:16:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59249 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59249 ']' 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59249 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59249 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59249' 00:10:11.720 killing process with pid 59249 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59249 00:10:11.720 10:16:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59249 00:10:11.978 [2024-11-25 10:16:06.145008] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
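
The scheduler_create_thread test above drives SPDK's scheduler test plugin entirely over JSON-RPC: pinned threads are created with a name, a core mask and an active percentage, one unpinned thread has its activity raised at run time, and a final thread is created only to be deleted again before teardown. The sketch below replays that call sequence by hand; it is a minimal illustration, assuming a scheduler test app is already listening on its default RPC socket and that the plugin module is importable (the PYTHONPATH export and plugin path are assumptions, not taken from this log).

  # Make the scheduler test plugin importable by rpc.py (path is assumed).
  export PYTHONPATH="$PYTHONPATH:./test/event/scheduler"
  rpc="scripts/rpc.py"

  # Pinned threads: -n name, -m core mask, -a share of time spent active (%).
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

  # Unpinned threads; the RPC prints the new thread id, as the thread_id=11
  # capture in the trace shows.
  $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

  # Raise the half_active thread to 50% at run time, then create and delete
  # one more thread to exercise teardown.
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"
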
00:10:12.911 00:10:12.911 real 0m5.793s 00:10:12.911 user 0m10.071s 00:10:12.911 sys 0m0.573s 00:10:12.911 10:16:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.911 10:16:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:12.911 ************************************ 00:10:12.911 END TEST event_scheduler 00:10:12.911 ************************************ 00:10:12.911 10:16:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:13.169 10:16:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:13.169 10:16:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.169 10:16:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.169 10:16:07 event -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 ************************************ 00:10:13.169 START TEST app_repeat 00:10:13.169 ************************************ 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59355 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:13.169 Process app_repeat pid: 59355 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59355' 00:10:13.169 spdk_app_start Round 0 00:10:13.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:13.169 10:16:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59355 /var/tmp/spdk-nbd.sock 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59355 ']' 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.169 10:16:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 [2024-11-25 10:16:07.330717] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:10:13.169 [2024-11-25 10:16:07.331204] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59355 ] 00:10:13.427 [2024-11-25 10:16:07.519586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.427 [2024-11-25 10:16:07.672281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.427 [2024-11-25 10:16:07.672296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.364 10:16:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.364 10:16:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:14.364 10:16:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:14.364 Malloc0 00:10:14.364 10:16:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:14.929 Malloc1 00:10:14.929 10:16:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:14.929 10:16:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:15.186 /dev/nbd0 00:10:15.186 10:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:15.186 10:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:15.186 10:16:09 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:15.186 10:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:15.187 10:16:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:15.187 1+0 records in 00:10:15.187 1+0 records out 00:10:15.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294001 s, 13.9 MB/s 00:10:15.187 10:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.187 10:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:15.187 10:16:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.187 10:16:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:15.187 10:16:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:15.187 10:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:15.187 10:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:15.187 10:16:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:15.444 /dev/nbd1 00:10:15.444 10:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:15.444 10:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:15.444 10:16:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:15.444 10:16:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:15.444 10:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:15.444 10:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:15.444 10:16:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:15.444 10:16:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:15.445 1+0 records in 00:10:15.445 1+0 records out 00:10:15.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372994 s, 11.0 MB/s 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:15.445 10:16:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:15.445 10:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:15.445 10:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:15.445 10:16:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:15.445 10:16:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
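
The waitfornbd helper traced above gates every use of an nbd device on a two-part readiness probe: poll /proc/partitions until the device name appears, then prove the device actually answers I/O with one 4 KiB O_DIRECT read that must produce a non-empty file. A rough standalone equivalent is sketched below; the retry delay and the temp-file path are illustrative choices, not values from this log.

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # Stop polling once the kernel has registered the device.
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # delay between retries is an assumption
      done
      # A direct read bypasses the page cache, so it must hit the device.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1   # read must be non-empty
      rm -f /tmp/nbdtest
  }
  waitfornbd nbd0 && waitfornbd nbd1
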
00:10:15.445 10:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:15.703 10:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:15.703 { 00:10:15.703 "nbd_device": "/dev/nbd0", 00:10:15.703 "bdev_name": "Malloc0" 00:10:15.703 }, 00:10:15.703 { 00:10:15.703 "nbd_device": "/dev/nbd1", 00:10:15.703 "bdev_name": "Malloc1" 00:10:15.703 } 00:10:15.703 ]' 00:10:15.703 10:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:15.703 { 00:10:15.703 "nbd_device": "/dev/nbd0", 00:10:15.703 "bdev_name": "Malloc0" 00:10:15.703 }, 00:10:15.703 { 00:10:15.703 "nbd_device": "/dev/nbd1", 00:10:15.703 "bdev_name": "Malloc1" 00:10:15.703 } 00:10:15.703 ]' 00:10:15.703 10:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:15.960 /dev/nbd1' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:15.960 /dev/nbd1' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:15.960 256+0 records in 00:10:15.960 256+0 records out 00:10:15.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0087261 s, 120 MB/s 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:15.960 256+0 records in 00:10:15.960 256+0 records out 00:10:15.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295001 s, 35.5 MB/s 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:15.960 256+0 records in 00:10:15.960 256+0 records out 00:10:15.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367208 s, 28.6 MB/s 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:15.960 10:16:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.960 10:16:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:16.218 10:16:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:16.476 10:16:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.476 10:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:17.096 10:16:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:17.096 10:16:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:17.365 10:16:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:18.741 [2024-11-25 10:16:12.824204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:18.741 [2024-11-25 10:16:12.965257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.741 [2024-11-25 10:16:12.965266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.999 [2024-11-25 10:16:13.177751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:18.999 [2024-11-25 10:16:13.177920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:20.376 spdk_app_start Round 1 00:10:20.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:20.376 10:16:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:20.376 10:16:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:20.376 10:16:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59355 /var/tmp/spdk-nbd.sock 00:10:20.376 10:16:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59355 ']' 00:10:20.376 10:16:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:20.376 10:16:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.376 10:16:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
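
Round 1 now repeats the cycle Round 0 just completed: create Malloc0 and Malloc1 over RPC, attach them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with O_DIRECT and compare it back byte for byte, then detach both disks. Condensed to its shell skeleton (RPC socket, sizes and dd/cmp flags as in the trace; the temp-file path is shortened for illustration and error handling is omitted):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096          # 64 MB bdev, 4096-byte blocks -> Malloc0
  $rpc bdev_malloc_create 64 4096          # second bdev -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"  # read-back must match the source
  done
  rm /tmp/nbdrandtest

  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
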
00:10:20.376 10:16:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.376 10:16:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:20.634 10:16:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.634 10:16:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:20.634 10:16:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.212 Malloc0 00:10:21.212 10:16:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.521 Malloc1 00:10:21.521 10:16:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.521 10:16:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:21.780 /dev/nbd0 00:10:21.780 10:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:21.780 10:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:21.780 1+0 records in 00:10:21.780 1+0 records out 
00:10:21.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546279 s, 7.5 MB/s 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.780 10:16:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:21.780 10:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.780 10:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.780 10:16:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:22.352 /dev/nbd1 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:22.352 1+0 records in 00:10:22.352 1+0 records out 00:10:22.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379167 s, 10.8 MB/s 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.352 10:16:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.352 10:16:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:22.611 { 00:10:22.611 "nbd_device": "/dev/nbd0", 00:10:22.611 "bdev_name": "Malloc0" 00:10:22.611 }, 00:10:22.611 { 00:10:22.611 "nbd_device": "/dev/nbd1", 00:10:22.611 "bdev_name": "Malloc1" 00:10:22.611 } 
00:10:22.611 ]' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:22.611 { 00:10:22.611 "nbd_device": "/dev/nbd0", 00:10:22.611 "bdev_name": "Malloc0" 00:10:22.611 }, 00:10:22.611 { 00:10:22.611 "nbd_device": "/dev/nbd1", 00:10:22.611 "bdev_name": "Malloc1" 00:10:22.611 } 00:10:22.611 ]' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:22.611 /dev/nbd1' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:22.611 /dev/nbd1' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:22.611 256+0 records in 00:10:22.611 256+0 records out 00:10:22.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.009972 s, 105 MB/s 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:22.611 256+0 records in 00:10:22.611 256+0 records out 00:10:22.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312086 s, 33.6 MB/s 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:22.611 256+0 records in 00:10:22.611 256+0 records out 00:10:22.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381049 s, 27.5 MB/s 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.611 10:16:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.869 10:16:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.126 10:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:23.693 10:16:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:23.693 10:16:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:24.259 10:16:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:25.191 [2024-11-25 10:16:19.435419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:25.448 [2024-11-25 10:16:19.575523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.448 [2024-11-25 10:16:19.575523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.706 [2024-11-25 10:16:19.801645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:25.706 [2024-11-25 10:16:19.801768] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:27.078 spdk_app_start Round 2 00:10:27.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:27.078 10:16:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:27.078 10:16:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:27.078 10:16:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59355 /var/tmp/spdk-nbd.sock 00:10:27.078 10:16:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59355 ']' 00:10:27.078 10:16:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:27.078 10:16:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.078 10:16:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
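
At the end of each round, the nbd_get_count helper seen above confirms the teardown leaked nothing: it lists the remaining nbd devices over RPC and counts /dev/nbd names in the returned JSON, expecting zero once both disks are stopped. The same check as one pipeline; the trailing true mirrors the trace, since grep -c exits non-zero when it counts zero matches:

  count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] && echo 'all nbd devices detached'
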
00:10:27.078 10:16:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.078 10:16:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:27.338 10:16:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.338 10:16:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:27.338 10:16:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:27.904 Malloc0 00:10:27.904 10:16:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:28.162 Malloc1 00:10:28.162 10:16:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.162 10:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:28.419 /dev/nbd0 00:10:28.419 10:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:28.419 10:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:28.419 1+0 records in 00:10:28.419 1+0 records out 
00:10:28.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313216 s, 13.1 MB/s 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.419 10:16:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:28.419 10:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.419 10:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.419 10:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:28.677 /dev/nbd1 00:10:28.677 10:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:28.677 10:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:28.677 1+0 records in 00:10:28.677 1+0 records out 00:10:28.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422416 s, 9.7 MB/s 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.677 10:16:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:28.677 10:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.677 10:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.677 10:16:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.677 10:16:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.677 10:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.934 10:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:28.934 { 00:10:28.934 "nbd_device": "/dev/nbd0", 00:10:28.934 "bdev_name": "Malloc0" 00:10:28.934 }, 00:10:28.934 { 00:10:28.934 "nbd_device": "/dev/nbd1", 00:10:28.934 "bdev_name": "Malloc1" 00:10:28.934 } 
00:10:28.934 ]' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.191 { 00:10:29.191 "nbd_device": "/dev/nbd0", 00:10:29.191 "bdev_name": "Malloc0" 00:10:29.191 }, 00:10:29.191 { 00:10:29.191 "nbd_device": "/dev/nbd1", 00:10:29.191 "bdev_name": "Malloc1" 00:10:29.191 } 00:10:29.191 ]' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:29.191 /dev/nbd1' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:29.191 /dev/nbd1' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:29.191 256+0 records in 00:10:29.191 256+0 records out 00:10:29.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00843551 s, 124 MB/s 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:29.191 256+0 records in 00:10:29.191 256+0 records out 00:10:29.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0324872 s, 32.3 MB/s 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:29.191 256+0 records in 00:10:29.191 256+0 records out 00:10:29.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0389513 s, 26.9 MB/s 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:29.191 10:16:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.191 10:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:29.449 10:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.449 10:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.449 10:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.449 10:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.449 10:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.449 10:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.736 10:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:29.736 10:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.736 10:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.736 10:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.994 10:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:30.251 10:16:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:30.251 10:16:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:30.817 10:16:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:32.188 [2024-11-25 10:16:26.274026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:32.188 [2024-11-25 10:16:26.408986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.188 [2024-11-25 10:16:26.408997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.445 [2024-11-25 10:16:26.631909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:32.445 [2024-11-25 10:16:26.631979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:33.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:33.823 10:16:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59355 /var/tmp/spdk-nbd.sock 00:10:33.823 10:16:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59355 ']' 00:10:33.823 10:16:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:33.823 10:16:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.823 10:16:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
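
Rounds are separated the same way every time, as the restart traced above shows: the harness asks the running app to deliver SIGTERM to itself over RPC, sleeps three seconds while the reactors wind down, and the next round's startup then logs the received shutdown signal. In shell terms:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3   # let the app exit before the next round begins
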
00:10:33.823 10:16:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.823 10:16:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:34.080 10:16:28 event.app_repeat -- event/event.sh@39 -- # killprocess 59355 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59355 ']' 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59355 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59355 00:10:34.080 killing process with pid 59355 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59355' 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59355 00:10:34.080 10:16:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59355 00:10:35.512 spdk_app_start is called in Round 0. 00:10:35.512 Shutdown signal received, stop current app iteration 00:10:35.512 Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 reinitialization... 00:10:35.512 spdk_app_start is called in Round 1. 00:10:35.512 Shutdown signal received, stop current app iteration 00:10:35.512 Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 reinitialization... 00:10:35.512 spdk_app_start is called in Round 2. 00:10:35.512 Shutdown signal received, stop current app iteration 00:10:35.512 Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 reinitialization... 00:10:35.512 spdk_app_start is called in Round 3. 00:10:35.512 Shutdown signal received, stop current app iteration 00:10:35.512 10:16:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:35.512 10:16:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:35.512 00:10:35.512 real 0m22.220s 00:10:35.512 user 0m48.833s 00:10:35.512 sys 0m3.512s 00:10:35.512 10:16:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.512 10:16:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:35.512 ************************************ 00:10:35.512 END TEST app_repeat 00:10:35.512 ************************************ 00:10:35.512 10:16:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:35.512 10:16:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:35.512 10:16:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.512 10:16:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.512 10:16:29 event -- common/autotest_common.sh@10 -- # set +x 00:10:35.512 ************************************ 00:10:35.512 START TEST cpu_locks 00:10:35.512 ************************************ 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:35.512 * Looking for test storage... 
00:10:35.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.512 10:16:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:35.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.512 --rc genhtml_branch_coverage=1 00:10:35.512 --rc genhtml_function_coverage=1 00:10:35.512 --rc genhtml_legend=1 00:10:35.512 --rc geninfo_all_blocks=1 00:10:35.512 --rc geninfo_unexecuted_blocks=1 00:10:35.512 00:10:35.512 ' 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:35.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.512 --rc genhtml_branch_coverage=1 00:10:35.512 --rc genhtml_function_coverage=1 
00:10:35.512 --rc genhtml_legend=1 00:10:35.512 --rc geninfo_all_blocks=1 00:10:35.512 --rc geninfo_unexecuted_blocks=1 00:10:35.512 00:10:35.512 ' 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:35.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.512 --rc genhtml_branch_coverage=1 00:10:35.512 --rc genhtml_function_coverage=1 00:10:35.512 --rc genhtml_legend=1 00:10:35.512 --rc geninfo_all_blocks=1 00:10:35.512 --rc geninfo_unexecuted_blocks=1 00:10:35.512 00:10:35.512 ' 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:35.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.512 --rc genhtml_branch_coverage=1 00:10:35.512 --rc genhtml_function_coverage=1 00:10:35.512 --rc genhtml_legend=1 00:10:35.512 --rc geninfo_all_blocks=1 00:10:35.512 --rc geninfo_unexecuted_blocks=1 00:10:35.512 00:10:35.512 ' 00:10:35.512 10:16:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:35.512 10:16:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:35.512 10:16:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:35.512 10:16:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.512 10:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:35.512 ************************************ 00:10:35.512 START TEST default_locks 00:10:35.512 ************************************ 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59832 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59832 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.512 10:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:35.769 [2024-11-25 10:16:29.867646] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
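The lcov probe a few lines up runs the repo's dotted-version comparison (lt 1.15 2 via cmp_versions, splitting on '.', '-' and ':'). A simplified sketch of the same idea, not the exact scripts/common.sh implementation, and assuming purely numeric components:

    # Return success when dotted version $1 is strictly older than $2.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"   # same IFS as the trace: split on '.', '-', ':'
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x, so the branch-coverage flags get enabled"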
00:10:35.769 [2024-11-25 10:16:29.867849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:10:35.769 [2024-11-25 10:16:30.059356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.025 [2024-11-25 10:16:30.218266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.959 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.959 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:36.959 10:16:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59832 00:10:36.959 10:16:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:36.959 10:16:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59832 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59832 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59832 ']' 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59832 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59832 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.522 killing process with pid 59832 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59832' 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59832 00:10:37.522 10:16:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59832 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59832 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59832 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59832 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.060 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.060 ERROR: process (pid: 59832) is no longer running 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59832) - No such process 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:40.060 00:10:40.060 real 0m4.261s 00:10:40.060 user 0m4.267s 00:10:40.060 sys 0m0.800s 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.060 ************************************ 00:10:40.060 END TEST default_locks 00:10:40.060 10:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.060 ************************************ 00:10:40.060 10:16:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:40.060 10:16:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.060 10:16:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.060 10:16:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.060 ************************************ 00:10:40.060 START TEST default_locks_via_rpc 00:10:40.060 ************************************ 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59917 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59917 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59917 ']' 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
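The 'NOT waitforlisten 59832' sequence in the default_locks teardown above is a negative test: the helper must now fail, and the wrapper converts that expected failure into success (es=1, then the inverted check). A minimal sketch of the wrapper pattern, not the exact autotest_common.sh code:

    # Run a command that is expected to fail; succeed only if it does.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # death by signal is a real failure
        (( es != 0 ))                    # invert: a non-zero exit becomes success
    }

    # Mirroring the trace: pid 59832 was just killed, so probing it must fail.
    NOT kill -0 59832 && echo 'process 59832 is gone, as expected'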
00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.060 10:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.060 [2024-11-25 10:16:34.213989] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:10:40.060 [2024-11-25 10:16:34.214177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:10:40.316 [2024-11-25 10:16:34.396648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.316 [2024-11-25 10:16:34.562632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59917 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59917 00:10:41.249 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:41.507 10:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59917 00:10:41.507 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59917 ']' 00:10:41.507 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59917 00:10:41.507 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:41.507 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.765 10:16:35 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59917 00:10:41.765 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.765 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.765 killing process with pid 59917 00:10:41.765 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59917' 00:10:41.765 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59917 00:10:41.765 10:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59917 00:10:44.295 00:10:44.295 real 0m4.211s 00:10:44.295 user 0m4.251s 00:10:44.295 sys 0m0.772s 00:10:44.295 10:16:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.295 10:16:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.295 ************************************ 00:10:44.295 END TEST default_locks_via_rpc 00:10:44.295 ************************************ 00:10:44.295 10:16:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:44.295 10:16:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.295 10:16:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.295 10:16:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:44.295 ************************************ 00:10:44.295 START TEST non_locking_app_on_locked_coremask 00:10:44.295 ************************************ 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59994 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59994 /var/tmp/spdk.sock 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59994 ']' 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:44.295 10:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:44.295 [2024-11-25 10:16:38.427843] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:10:44.295 [2024-11-25 10:16:38.428007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:10:44.295 [2024-11-25 10:16:38.605687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.593 [2024-11-25 10:16:38.777366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60010 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60010 /var/tmp/spdk2.sock 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60010 ']' 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.552 10:16:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.552 [2024-11-25 10:16:39.790501] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:10:45.552 [2024-11-25 10:16:39.791466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:10:45.810 [2024-11-25 10:16:39.999693] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
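The locks_exist checks above reduce to asking lslocks whether the pid still holds a file lock whose path contains spdk_cpu_lock. The same probe as a standalone helper, commands exactly as in the trace:

    # True when the given pid holds at least one SPDK CPU core lock file.
    locks_exist() {
        local pid=$1
        # spdk_tgt flocks /var/tmp/spdk_cpu_lock_<core>; lslocks lists locks per pid.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 59994 && echo 'pid 59994 still owns its core lock'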
00:10:45.810 [2024-11-25 10:16:39.999795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.068 [2024-11-25 10:16:40.263349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.596 10:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.596 10:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:48.596 10:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59994 00:10:48.596 10:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59994 00:10:48.596 10:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:49.162 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59994 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59994 ']' 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59994 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59994 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.163 killing process with pid 59994 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59994' 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59994 00:10:49.163 10:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59994 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60010 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60010 ']' 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60010 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60010 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60010' 00:10:54.430 killing process with pid 60010 00:10:54.430 10:16:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60010 00:10:54.430 10:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60010 00:10:56.344 ************************************ 00:10:56.344 END TEST non_locking_app_on_locked_coremask 00:10:56.344 ************************************ 00:10:56.344 00:10:56.344 real 0m12.285s 00:10:56.344 user 0m12.674s 00:10:56.344 sys 0m1.562s 00:10:56.344 10:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.344 10:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:56.344 10:16:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:56.344 10:16:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.344 10:16:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.344 10:16:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:56.344 ************************************ 00:10:56.344 START TEST locking_app_on_unlocked_coremask 00:10:56.344 ************************************ 00:10:56.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60165 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60165 /var/tmp/spdk.sock 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60165 ']' 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.344 10:16:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:56.603 [2024-11-25 10:16:50.805391] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:10:56.603 [2024-11-25 10:16:50.805579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60165 ] 00:10:56.860 [2024-11-25 10:16:50.984639] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
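locking_app_on_unlocked_coremask starts this first target with --disable-cpumask-locks, so the second target launched next on the same -m 0x1 mask can claim /var/tmp/spdk_cpu_lock_000 itself. The shape of that setup, with the flags taken from the trace (pids, sockets and timing vary per run):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance: core 0, but opted out of the CPU core lock files.
    "$tgt" -m 0x1 --disable-cpumask-locks &
    pid1=$!

    # Second instance: same core mask, separate RPC socket. It succeeds because
    # the first instance never claimed /var/tmp/spdk_cpu_lock_000.
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!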
00:10:56.860 [2024-11-25 10:16:50.985141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.860 [2024-11-25 10:16:51.133056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60186 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60186 /var/tmp/spdk2.sock 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60186 ']' 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.791 10:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:58.049 [2024-11-25 10:16:52.221656] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:10:58.049 [2024-11-25 10:16:52.222426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60186 ] 00:10:58.307 [2024-11-25 10:16:52.421749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.564 [2024-11-25 10:16:52.725360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.094 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.094 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:01.094 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60186 00:11:01.094 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60186 00:11:01.094 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60165 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60165 ']' 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60165 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60165 00:11:01.669 killing process with pid 60165 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60165' 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60165 00:11:01.669 10:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60165 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60186 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60186 ']' 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60186 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60186 00:11:06.935 killing process with pid 60186 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.935 10:17:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60186' 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60186 00:11:06.935 10:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60186 00:11:09.467 ************************************ 00:11:09.467 END TEST locking_app_on_unlocked_coremask 00:11:09.467 ************************************ 00:11:09.467 00:11:09.467 real 0m12.590s 00:11:09.467 user 0m12.988s 00:11:09.467 sys 0m1.711s 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.467 10:17:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:09.467 10:17:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:09.467 10:17:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.467 10:17:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:09.467 ************************************ 00:11:09.467 START TEST locking_app_on_locked_coremask 00:11:09.467 ************************************ 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60342 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60342 /var/tmp/spdk.sock 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60342 ']' 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.467 10:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.467 [2024-11-25 10:17:03.475515] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:11:09.467 [2024-11-25 10:17:03.475705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60342 ] 00:11:09.467 [2024-11-25 10:17:03.662312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.724 [2024-11-25 10:17:03.815854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60369 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60369 /var/tmp/spdk2.sock 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60369 /var/tmp/spdk2.sock 00:11:10.658 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60369 /var/tmp/spdk2.sock 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60369 ']' 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:10.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.659 10:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.659 [2024-11-25 10:17:04.949949] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:11:10.659 [2024-11-25 10:17:04.950137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60369 ] 00:11:10.917 [2024-11-25 10:17:05.168229] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60342 has claimed it. 00:11:10.917 [2024-11-25 10:17:05.168375] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:11.483 ERROR: process (pid: 60369) is no longer running 00:11:11.483 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60369) - No such process 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60342 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:11.483 10:17:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60342 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60342 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60342 ']' 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60342 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60342 00:11:12.049 killing process with pid 60342 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60342' 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60342 00:11:12.049 10:17:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60342 00:11:14.621 ************************************ 00:11:14.621 END TEST locking_app_on_locked_coremask 00:11:14.621 ************************************ 00:11:14.621 00:11:14.621 real 0m5.260s 00:11:14.621 user 0m5.508s 00:11:14.621 sys 0m1.094s 00:11:14.621 10:17:08 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.621 10:17:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.621 10:17:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:14.621 10:17:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.621 10:17:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.621 10:17:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:14.621 ************************************ 00:11:14.621 START TEST locking_overlapped_coremask 00:11:14.621 ************************************ 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60433 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60433 /var/tmp/spdk.sock 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60433 ']' 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.621 10:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.621 [2024-11-25 10:17:08.744278] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:11:14.621 [2024-11-25 10:17:08.744842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:11:14.621 [2024-11-25 10:17:08.926566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.879 [2024-11-25 10:17:09.090798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.879 [2024-11-25 10:17:09.090899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.879 [2024-11-25 10:17:09.090916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60462 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60462 /var/tmp/spdk2.sock 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60462 /var/tmp/spdk2.sock 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60462 /var/tmp/spdk2.sock 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60462 ']' 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.811 10:17:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:16.070 [2024-11-25 10:17:10.265022] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:11:16.070 [2024-11-25 10:17:10.265224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:11:16.328 [2024-11-25 10:17:10.465992] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60433 has claimed it. 00:11:16.328 [2024-11-25 10:17:10.466137] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:16.893 ERROR: process (pid: 60462) is no longer running 00:11:16.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60462) - No such process 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:16.893 10:17:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60433 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60433 ']' 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60433 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60433 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60433' 00:11:16.894 killing process with pid 60433 00:11:16.894 10:17:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60433 00:11:16.894 10:17:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60433 00:11:19.448 00:11:19.448 real 0m4.972s 00:11:19.448 user 0m13.562s 00:11:19.448 sys 0m0.869s 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.448 ************************************ 00:11:19.448 END TEST locking_overlapped_coremask 00:11:19.448 ************************************ 00:11:19.448 10:17:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:19.448 10:17:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.448 10:17:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.448 10:17:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:19.448 ************************************ 00:11:19.448 START TEST locking_overlapped_coremask_via_rpc 00:11:19.448 ************************************ 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60526 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60526 /var/tmp/spdk.sock 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60526 ']' 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:19.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.448 10:17:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.448 [2024-11-25 10:17:13.762082] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:11:19.448 [2024-11-25 10:17:13.762580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60526 ] 00:11:19.706 [2024-11-25 10:17:13.939615] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
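After the -m 0x7 target wins the overlap above, check_remaining_locks confirms that exactly the lock files for cores 0-2 remain, by comparing a glob against a brace expansion (paths as in the trace):

    # Whatever core lock files exist must be exactly cores 000..002.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
        echo 'exactly cores 0-2 are locked'
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi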
00:11:19.706 [2024-11-25 10:17:13.939733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.963 [2024-11-25 10:17:14.096030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.963 [2024-11-25 10:17:14.096174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.963 [2024-11-25 10:17:14.096188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60550 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60550 /var/tmp/spdk2.sock 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60550 ']' 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.898 10:17:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.155 [2024-11-25 10:17:15.249981] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:11:21.155 [2024-11-25 10:17:15.250459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60550 ] 00:11:21.155 [2024-11-25 10:17:15.452134] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:21.155 [2024-11-25 10:17:15.452257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.721 [2024-11-25 10:17:15.774761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.721 [2024-11-25 10:17:15.774869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.722 [2024-11-25 10:17:15.774890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.296 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.297 [2024-11-25 10:17:18.067140] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60526 has claimed it. 
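[editor's note] This error is the expected outcome: the first target was launched with -m 0x7 (reactors on cores 0-2, as logged above) and the second with -m 0x1c (cores 2-4), so the masks collide on core 2 — the core named in the error. The overlap can be confirmed with shell arithmetic:

    printf 'overlapping mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2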
00:11:24.297 request: 00:11:24.297 { 00:11:24.297 "method": "framework_enable_cpumask_locks", 00:11:24.297 "req_id": 1 00:11:24.297 } 00:11:24.297 Got JSON-RPC error response 00:11:24.297 response: 00:11:24.297 { 00:11:24.297 "code": -32603, 00:11:24.297 "message": "Failed to claim CPU core: 2" 00:11:24.297 } 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60526 /var/tmp/spdk.sock 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60526 ']' 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60550 /var/tmp/spdk2.sock 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60550 ']' 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
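[editor's note] The -32603 "Failed to claim CPU core: 2" response is the secondary target refusing framework_enable_cpumask_locks while the primary still holds the core-2 lock; the same RPC against the primary's default socket succeeded just above. rpc_cmd in the trace is the harness wrapper; invoking scripts/rpc.py directly with the same method names should behave identically:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                        # primary (/var/tmp/spdk.sock): succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks # secondary: returns the error above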
00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.297 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:24.556 00:11:24.556 real 0m5.072s 00:11:24.556 user 0m1.857s 00:11:24.556 sys 0m0.246s 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.556 10:17:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.556 ************************************ 00:11:24.556 END TEST locking_overlapped_coremask_via_rpc 00:11:24.556 ************************************ 00:11:24.556 10:17:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:24.556 10:17:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60526 ]] 00:11:24.556 10:17:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60526 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60526 ']' 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60526 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60526 00:11:24.556 killing process with pid 60526 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60526' 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60526 00:11:24.556 10:17:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60526 00:11:27.082 10:17:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60550 ]] 00:11:27.082 10:17:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60550 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60550 ']' 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60550 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.082 
10:17:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60550 00:11:27.082 killing process with pid 60550 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60550' 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60550 00:11:27.082 10:17:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60550 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60526 ]] 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60526 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60526 ']' 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60526 00:11:29.629 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60526) - No such process 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60526 is not found' 00:11:29.629 Process with pid 60526 is not found 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60550 ]] 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60550 00:11:29.629 Process with pid 60550 is not found 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60550 ']' 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60550 00:11:29.629 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60550) - No such process 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60550 is not found' 00:11:29.629 10:17:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:29.629 00:11:29.629 real 0m54.229s 00:11:29.629 user 1m33.608s 00:11:29.629 sys 0m8.517s 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.629 ************************************ 00:11:29.629 END TEST cpu_locks 00:11:29.629 ************************************ 00:11:29.629 10:17:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.629 ************************************ 00:11:29.629 END TEST event 00:11:29.629 ************************************ 00:11:29.629 00:11:29.629 real 1m27.833s 00:11:29.629 user 2m40.044s 00:11:29.629 sys 0m13.273s 00:11:29.629 10:17:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.629 10:17:23 event -- common/autotest_common.sh@10 -- # set +x 00:11:29.629 10:17:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:29.629 10:17:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.629 10:17:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.629 10:17:23 -- common/autotest_common.sh@10 -- # set +x 00:11:29.629 ************************************ 00:11:29.629 START TEST thread 00:11:29.629 ************************************ 00:11:29.629 10:17:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:29.629 * Looking for test storage... 
00:11:29.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:29.629 10:17:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.629 10:17:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.629 10:17:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.888 10:17:24 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.888 10:17:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.888 10:17:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.889 10:17:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.889 10:17:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.889 10:17:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.889 10:17:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.889 10:17:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.889 10:17:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.889 10:17:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.889 10:17:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.889 10:17:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.889 10:17:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:29.889 10:17:24 thread -- scripts/common.sh@345 -- # : 1 00:11:29.889 10:17:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.889 10:17:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.889 10:17:24 thread -- scripts/common.sh@365 -- # decimal 1 00:11:29.889 10:17:24 thread -- scripts/common.sh@353 -- # local d=1 00:11:29.889 10:17:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.889 10:17:24 thread -- scripts/common.sh@355 -- # echo 1 00:11:29.889 10:17:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.889 10:17:24 thread -- scripts/common.sh@366 -- # decimal 2 00:11:29.889 10:17:24 thread -- scripts/common.sh@353 -- # local d=2 00:11:29.889 10:17:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.889 10:17:24 thread -- scripts/common.sh@355 -- # echo 2 00:11:29.889 10:17:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.889 10:17:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.889 10:17:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.889 10:17:24 thread -- scripts/common.sh@368 -- # return 0 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.889 --rc genhtml_branch_coverage=1 00:11:29.889 --rc genhtml_function_coverage=1 00:11:29.889 --rc genhtml_legend=1 00:11:29.889 --rc geninfo_all_blocks=1 00:11:29.889 --rc geninfo_unexecuted_blocks=1 00:11:29.889 00:11:29.889 ' 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.889 --rc genhtml_branch_coverage=1 00:11:29.889 --rc genhtml_function_coverage=1 00:11:29.889 --rc genhtml_legend=1 00:11:29.889 --rc geninfo_all_blocks=1 00:11:29.889 --rc geninfo_unexecuted_blocks=1 00:11:29.889 00:11:29.889 ' 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:29.889 --rc genhtml_branch_coverage=1 00:11:29.889 --rc genhtml_function_coverage=1 00:11:29.889 --rc genhtml_legend=1 00:11:29.889 --rc geninfo_all_blocks=1 00:11:29.889 --rc geninfo_unexecuted_blocks=1 00:11:29.889 00:11:29.889 ' 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.889 --rc genhtml_branch_coverage=1 00:11:29.889 --rc genhtml_function_coverage=1 00:11:29.889 --rc genhtml_legend=1 00:11:29.889 --rc geninfo_all_blocks=1 00:11:29.889 --rc geninfo_unexecuted_blocks=1 00:11:29.889 00:11:29.889 ' 00:11:29.889 10:17:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.889 10:17:24 thread -- common/autotest_common.sh@10 -- # set +x 00:11:29.889 ************************************ 00:11:29.889 START TEST thread_poller_perf 00:11:29.889 ************************************ 00:11:29.889 10:17:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:29.889 [2024-11-25 10:17:24.130153] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:11:29.889 [2024-11-25 10:17:24.130347] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:11:30.148 [2024-11-25 10:17:24.322341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.406 [2024-11-25 10:17:24.495981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.406 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:31.782 [2024-11-25T10:17:26.115Z] ====================================== 00:11:31.782 [2024-11-25T10:17:26.115Z] busy:2216606346 (cyc) 00:11:31.782 [2024-11-25T10:17:26.115Z] total_run_count: 280000 00:11:31.782 [2024-11-25T10:17:26.115Z] tsc_hz: 2200000000 (cyc) 00:11:31.782 [2024-11-25T10:17:26.115Z] ====================================== 00:11:31.782 [2024-11-25T10:17:26.115Z] poller_cost: 7916 (cyc), 3598 (nsec) 00:11:31.782 00:11:31.782 real 0m1.674s 00:11:31.782 user 0m1.448s 00:11:31.782 sys 0m0.114s 00:11:31.782 10:17:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.782 10:17:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:31.782 ************************************ 00:11:31.782 END TEST thread_poller_perf 00:11:31.782 ************************************ 00:11:31.782 10:17:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:31.782 10:17:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:31.782 10:17:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.782 10:17:25 thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.782 ************************************ 00:11:31.782 START TEST thread_poller_perf 00:11:31.782 ************************************ 00:11:31.782 10:17:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:31.782 [2024-11-25 10:17:25.857078] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:11:31.782 [2024-11-25 10:17:25.857229] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60793 ] 00:11:31.782 [2024-11-25 10:17:26.034138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.041 Running 1000 pollers for 1 seconds with 0 microseconds period. 
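[editor's note] The poller_cost figures are plain division over the table above: busy cycles / total_run_count, then converted to nanoseconds via the reported tsc_hz. Checking the 1 µs run just printed (the 0 µs run that follows can be verified the same way):

    echo $(( 2216606346 / 280000 ))              # 7916 cycles per poller invocation
    echo $(( 7916 * 1000000000 / 2200000000 ))   # 3598 nsec at tsc_hz 2200000000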
00:11:32.041 [2024-11-25 10:17:26.165405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.415 [2024-11-25T10:17:27.748Z] ====================================== 00:11:33.415 [2024-11-25T10:17:27.748Z] busy:2204419860 (cyc) 00:11:33.415 [2024-11-25T10:17:27.748Z] total_run_count: 3818000 00:11:33.415 [2024-11-25T10:17:27.748Z] tsc_hz: 2200000000 (cyc) 00:11:33.415 [2024-11-25T10:17:27.748Z] ====================================== 00:11:33.415 [2024-11-25T10:17:27.748Z] poller_cost: 577 (cyc), 262 (nsec) 00:11:33.415 00:11:33.415 real 0m1.588s 00:11:33.415 user 0m1.375s 00:11:33.415 sys 0m0.103s 00:11:33.415 10:17:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.415 10:17:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:33.415 ************************************ 00:11:33.415 END TEST thread_poller_perf 00:11:33.415 ************************************ 00:11:33.415 10:17:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:33.415 00:11:33.415 real 0m3.578s 00:11:33.415 user 0m2.968s 00:11:33.415 sys 0m0.379s 00:11:33.415 10:17:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.415 10:17:27 thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.415 ************************************ 00:11:33.415 END TEST thread 00:11:33.415 ************************************ 00:11:33.415 10:17:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:33.415 10:17:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:33.415 10:17:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.415 10:17:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.415 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:11:33.415 ************************************ 00:11:33.415 START TEST app_cmdline 00:11:33.415 ************************************ 00:11:33.415 10:17:27 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:33.415 * Looking for test storage... 
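[editor's note] Comparing the two runs: the 0 µs run above costs 577 cycles (262 nsec) per invocation versus 7916 cycles (3598 nsec) for the 1 µs run — consistent with SPDK treating a zero-period poller as an active poller driven on every reactor iteration, while a non-zero period goes through the timed-poller path. Judging from the banner lines (an inference from this log, not the tool's documented help), -b is the poller count, -l the period in microseconds, and -t the duration in seconds:

    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1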
00:11:33.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:33.415 10:17:27 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.415 10:17:27 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.415 10:17:27 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.415 10:17:27 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:33.415 10:17:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.416 10:17:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.416 --rc genhtml_branch_coverage=1 00:11:33.416 --rc genhtml_function_coverage=1 00:11:33.416 --rc genhtml_legend=1 00:11:33.416 --rc geninfo_all_blocks=1 00:11:33.416 --rc geninfo_unexecuted_blocks=1 00:11:33.416 00:11:33.416 ' 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.416 --rc genhtml_branch_coverage=1 00:11:33.416 --rc genhtml_function_coverage=1 00:11:33.416 --rc genhtml_legend=1 00:11:33.416 --rc geninfo_all_blocks=1 00:11:33.416 --rc geninfo_unexecuted_blocks=1 00:11:33.416 
00:11:33.416 ' 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.416 --rc genhtml_branch_coverage=1 00:11:33.416 --rc genhtml_function_coverage=1 00:11:33.416 --rc genhtml_legend=1 00:11:33.416 --rc geninfo_all_blocks=1 00:11:33.416 --rc geninfo_unexecuted_blocks=1 00:11:33.416 00:11:33.416 ' 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.416 --rc genhtml_branch_coverage=1 00:11:33.416 --rc genhtml_function_coverage=1 00:11:33.416 --rc genhtml_legend=1 00:11:33.416 --rc geninfo_all_blocks=1 00:11:33.416 --rc geninfo_unexecuted_blocks=1 00:11:33.416 00:11:33.416 ' 00:11:33.416 10:17:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:33.416 10:17:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60877 00:11:33.416 10:17:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60877 00:11:33.416 10:17:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60877 ']' 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.416 10:17:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:33.675 [2024-11-25 10:17:27.872882] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:11:33.675 [2024-11-25 10:17:27.873080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60877 ] 00:11:33.933 [2024-11-25 10:17:28.062120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.933 [2024-11-25 10:17:28.193694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.930 10:17:29 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.930 10:17:29 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:34.930 10:17:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:35.202 { 00:11:35.202 "version": "SPDK v25.01-pre git sha1 1e9cebf19", 00:11:35.202 "fields": { 00:11:35.202 "major": 25, 00:11:35.202 "minor": 1, 00:11:35.202 "patch": 0, 00:11:35.202 "suffix": "-pre", 00:11:35.202 "commit": "1e9cebf19" 00:11:35.202 } 00:11:35.202 } 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:35.202 10:17:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:35.202 10:17:29 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:35.480 request: 00:11:35.480 { 00:11:35.480 "method": "env_dpdk_get_mem_stats", 00:11:35.480 "req_id": 1 00:11:35.480 } 00:11:35.480 Got JSON-RPC error response 00:11:35.480 response: 00:11:35.480 { 00:11:35.480 "code": -32601, 00:11:35.480 "message": "Method not found" 00:11:35.480 } 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.480 10:17:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60877 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60877 ']' 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60877 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60877 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.480 killing process with pid 60877 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60877' 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 60877 00:11:35.480 10:17:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 60877 00:11:38.009 00:11:38.009 real 0m4.724s 00:11:38.009 user 0m5.172s 00:11:38.009 sys 0m0.748s 00:11:38.009 10:17:32 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.009 ************************************ 00:11:38.009 END TEST app_cmdline 00:11:38.009 10:17:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:38.009 ************************************ 00:11:38.009 10:17:32 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:38.009 10:17:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.009 10:17:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.009 10:17:32 -- common/autotest_common.sh@10 -- # set +x 00:11:38.009 ************************************ 00:11:38.009 START TEST version 00:11:38.009 ************************************ 00:11:38.009 10:17:32 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:38.267 * Looking for test storage... 
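[editor's note] The -32601 "Method not found" above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and env_dpdk_get_mem_stats is rejected even though it is an ordinary SPDK RPC. Both sides can be reproduced with the exact commands from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed; prints the version JSON shown above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected with code -32601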
00:11:38.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.267 10:17:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.267 10:17:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.267 10:17:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.267 10:17:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.267 10:17:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.267 10:17:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.267 10:17:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.267 10:17:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.267 10:17:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.267 10:17:32 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.267 10:17:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.267 10:17:32 version -- scripts/common.sh@344 -- # case "$op" in 00:11:38.267 10:17:32 version -- scripts/common.sh@345 -- # : 1 00:11:38.267 10:17:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.267 10:17:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.267 10:17:32 version -- scripts/common.sh@365 -- # decimal 1 00:11:38.267 10:17:32 version -- scripts/common.sh@353 -- # local d=1 00:11:38.267 10:17:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.267 10:17:32 version -- scripts/common.sh@355 -- # echo 1 00:11:38.267 10:17:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.267 10:17:32 version -- scripts/common.sh@366 -- # decimal 2 00:11:38.267 10:17:32 version -- scripts/common.sh@353 -- # local d=2 00:11:38.267 10:17:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.267 10:17:32 version -- scripts/common.sh@355 -- # echo 2 00:11:38.267 10:17:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.267 10:17:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.267 10:17:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.267 10:17:32 version -- scripts/common.sh@368 -- # return 0 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.267 --rc genhtml_branch_coverage=1 00:11:38.267 --rc genhtml_function_coverage=1 00:11:38.267 --rc genhtml_legend=1 00:11:38.267 --rc geninfo_all_blocks=1 00:11:38.267 --rc geninfo_unexecuted_blocks=1 00:11:38.267 00:11:38.267 ' 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.267 --rc genhtml_branch_coverage=1 00:11:38.267 --rc genhtml_function_coverage=1 00:11:38.267 --rc genhtml_legend=1 00:11:38.267 --rc geninfo_all_blocks=1 00:11:38.267 --rc geninfo_unexecuted_blocks=1 00:11:38.267 00:11:38.267 ' 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.267 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:38.267 --rc genhtml_branch_coverage=1 00:11:38.267 --rc genhtml_function_coverage=1 00:11:38.267 --rc genhtml_legend=1 00:11:38.267 --rc geninfo_all_blocks=1 00:11:38.267 --rc geninfo_unexecuted_blocks=1 00:11:38.267 00:11:38.267 ' 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.267 --rc genhtml_branch_coverage=1 00:11:38.267 --rc genhtml_function_coverage=1 00:11:38.267 --rc genhtml_legend=1 00:11:38.267 --rc geninfo_all_blocks=1 00:11:38.267 --rc geninfo_unexecuted_blocks=1 00:11:38.267 00:11:38.267 ' 00:11:38.267 10:17:32 version -- app/version.sh@17 -- # get_header_version major 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # cut -f2 00:11:38.267 10:17:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # tr -d '"' 00:11:38.267 10:17:32 version -- app/version.sh@17 -- # major=25 00:11:38.267 10:17:32 version -- app/version.sh@18 -- # get_header_version minor 00:11:38.267 10:17:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # cut -f2 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # tr -d '"' 00:11:38.267 10:17:32 version -- app/version.sh@18 -- # minor=1 00:11:38.267 10:17:32 version -- app/version.sh@19 -- # get_header_version patch 00:11:38.267 10:17:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # cut -f2 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # tr -d '"' 00:11:38.267 10:17:32 version -- app/version.sh@19 -- # patch=0 00:11:38.267 10:17:32 version -- app/version.sh@20 -- # get_header_version suffix 00:11:38.267 10:17:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # cut -f2 00:11:38.267 10:17:32 version -- app/version.sh@14 -- # tr -d '"' 00:11:38.267 10:17:32 version -- app/version.sh@20 -- # suffix=-pre 00:11:38.267 10:17:32 version -- app/version.sh@22 -- # version=25.1 00:11:38.267 10:17:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:38.267 10:17:32 version -- app/version.sh@28 -- # version=25.1rc0 00:11:38.267 10:17:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:38.267 10:17:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:38.267 10:17:32 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:38.267 10:17:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:38.267 00:11:38.267 real 0m0.266s 00:11:38.267 user 0m0.175s 00:11:38.267 sys 0m0.131s 00:11:38.267 10:17:32 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.267 10:17:32 version -- common/autotest_common.sh@10 -- # set +x 00:11:38.267 ************************************ 00:11:38.267 END TEST version 00:11:38.267 ************************************ 00:11:38.267 10:17:32 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:38.267 10:17:32 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:38.267 10:17:32 -- spdk/autotest.sh@194 -- # uname -s 00:11:38.267 10:17:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:38.267 10:17:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:38.267 10:17:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:38.267 10:17:32 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:11:38.267 10:17:32 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:38.267 10:17:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.267 10:17:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.267 10:17:32 -- common/autotest_common.sh@10 -- # set +x 00:11:38.525 ************************************ 00:11:38.525 START TEST blockdev_nvme 00:11:38.525 ************************************ 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:38.525 * Looking for test storage... 00:11:38.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.525 10:17:32 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.525 --rc genhtml_branch_coverage=1 00:11:38.525 --rc genhtml_function_coverage=1 00:11:38.525 --rc genhtml_legend=1 00:11:38.525 --rc geninfo_all_blocks=1 00:11:38.525 --rc geninfo_unexecuted_blocks=1 00:11:38.525 00:11:38.525 ' 00:11:38.525 10:17:32 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.525 --rc genhtml_branch_coverage=1 00:11:38.525 --rc genhtml_function_coverage=1 00:11:38.525 --rc genhtml_legend=1 00:11:38.525 --rc geninfo_all_blocks=1 00:11:38.525 --rc geninfo_unexecuted_blocks=1 00:11:38.525 00:11:38.525 ' 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.526 --rc genhtml_branch_coverage=1 00:11:38.526 --rc genhtml_function_coverage=1 00:11:38.526 --rc genhtml_legend=1 00:11:38.526 --rc geninfo_all_blocks=1 00:11:38.526 --rc geninfo_unexecuted_blocks=1 00:11:38.526 00:11:38.526 ' 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.526 --rc genhtml_branch_coverage=1 00:11:38.526 --rc genhtml_function_coverage=1 00:11:38.526 --rc genhtml_legend=1 00:11:38.526 --rc geninfo_all_blocks=1 00:11:38.526 --rc geninfo_unexecuted_blocks=1 00:11:38.526 00:11:38.526 ' 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:38.526 10:17:32 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61071 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:38.526 10:17:32 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61071 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61071 ']' 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.526 10:17:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.784 [2024-11-25 10:17:32.972252] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
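[editor's note] The blockdev_nvme setup that follows attaches the four emulated QEMU controllers by feeding scripts/gen_nvme.sh output into load_subsystem_config, as the blockdev.sh@82/@83 trace lines below show. A trimmed sketch of that call with only the first controller (parameters copied verbatim from the trace):

    rpc_cmd load_subsystem_config -j '{ "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
    ] }'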
00:11:38.784 [2024-11-25 10:17:32.972489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:11:39.042 [2024-11-25 10:17:33.161738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.042 [2024-11-25 10:17:33.367502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.414 10:17:34 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.414 10:17:34 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:11:40.414 10:17:34 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:40.414 10:17:34 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:11:40.414 10:17:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:40.414 10:17:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:40.414 10:17:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:40.415 10:17:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.415 10:17:34 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.415 10:17:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:11:40.415 10:17:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.415 10:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.674 10:17:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.674 10:17:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.674 10:17:34 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:40.674 10:17:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:40.674 10:17:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.674 10:17:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.674 10:17:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:40.674 10:17:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:40.675 10:17:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d0088464-d090-4399-ac74-7268cb8857a7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d0088464-d090-4399-ac74-7268cb8857a7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a3a82044-b5ed-4876-aada-ce8c34029019"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a3a82044-b5ed-4876-aada-ce8c34029019",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3047fcf6-b9ac-4460-be85-ddf8ba181adf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3047fcf6-b9ac-4460-be85-ddf8ba181adf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4245db59-c616-4ff8-8029-08c429f77fba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4245db59-c616-4ff8-8029-08c429f77fba",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "842513c5-65a9-4ed7-a997-069d9721e651"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "842513c5-65a9-4ed7-a997-069d9721e651",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4c917879-2615-4e07-89ff-5b3697e450e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4c917879-2615-4e07-89ff-5b3697e450e3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:40.675 10:17:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:40.675 10:17:34 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:40.675 10:17:34 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:40.675 10:17:34 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61071 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61071 ']' 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61071 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:11:40.675 10:17:34 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61071 00:11:40.675 killing process with pid 61071 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61071' 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61071 00:11:40.675 10:17:34 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61071 00:11:43.205 10:17:37 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:43.205 10:17:37 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:43.205 10:17:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:43.205 10:17:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.205 10:17:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:43.205 ************************************ 00:11:43.205 START TEST bdev_hello_world 00:11:43.205 ************************************ 00:11:43.205 10:17:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:43.205 [2024-11-25 10:17:37.414619] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:11:43.205 [2024-11-25 10:17:37.414832] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61166 ] 00:11:43.464 [2024-11-25 10:17:37.590553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.464 [2024-11-25 10:17:37.738635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.400 [2024-11-25 10:17:38.435979] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:44.400 [2024-11-25 10:17:38.436042] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:44.400 [2024-11-25 10:17:38.436077] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:44.400 [2024-11-25 10:17:38.439758] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:44.400 [2024-11-25 10:17:38.440289] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:44.400 [2024-11-25 10:17:38.440337] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:44.400 [2024-11-25 10:17:38.440506] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
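[annotation] The hello_bdev run above is driven entirely by two inputs visible in the trace: a JSON bdev config and a bdev name. A standalone reproduction might look like the following sketch; the /tmp path is illustrative, and gen_nvme.sh emits the same bdev_nvme_attach_controller JSON shown earlier:

    scripts/gen_nvme.sh > /tmp/bdev.json                        # JSON attach-controller config
    build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1  # open Nvme0n1, write, read back "Hello World!"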
00:11:44.400 00:11:44.400 [2024-11-25 10:17:38.440547] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:45.331 ************************************ 00:11:45.331 END TEST bdev_hello_world 00:11:45.331 ************************************ 00:11:45.331 00:11:45.331 real 0m2.275s 00:11:45.331 user 0m1.830s 00:11:45.331 sys 0m0.334s 00:11:45.331 10:17:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.331 10:17:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:45.331 10:17:39 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:45.331 10:17:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.331 10:17:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.331 10:17:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.331 ************************************ 00:11:45.331 START TEST bdev_bounds 00:11:45.331 ************************************ 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:45.331 Process bdevio pid: 61219 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61219 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61219' 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61219 00:11:45.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61219 ']' 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.331 10:17:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:45.589 [2024-11-25 10:17:39.780984] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
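[annotation] bdev_bounds runs the bdevio app in its wait mode: the binary is started with -w so it initializes the bdev layer and then idles until a perform_tests RPC arrives, which is what tests.py sends next. A minimal sketch of that two-process pattern, assuming the same config file as in the previous sketch:

    test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json &   # -w: init, then wait for an RPC trigger
    bdevio_pid=$!
    # ...poll /var/tmp/spdk.sock as in the earlier waitforlisten sketch...
    test/bdev/bdevio/tests.py perform_tests                   # kicks off every suite shown below
    kill "$bdevio_pid"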
00:11:45.589 [2024-11-25 10:17:39.781244] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61219 ] 00:11:45.847 [2024-11-25 10:17:39.981861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:46.104 [2024-11-25 10:17:40.190345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.104 [2024-11-25 10:17:40.190475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.104 [2024-11-25 10:17:40.190485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.669 10:17:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.669 10:17:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:46.669 10:17:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:46.927 I/O targets: 00:11:46.927 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:46.927 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:46.927 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:46.927 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:46.927 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:46.927 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:46.927 00:11:46.927 00:11:46.927 CUnit - A unit testing framework for C - Version 2.1-3 00:11:46.927 http://cunit.sourceforge.net/ 00:11:46.927 00:11:46.927 00:11:46.927 Suite: bdevio tests on: Nvme3n1 00:11:46.927 Test: blockdev write read block ...passed 00:11:46.927 Test: blockdev write zeroes read block ...passed 00:11:46.927 Test: blockdev write zeroes read no split ...passed 00:11:46.927 Test: blockdev write zeroes read split ...passed 00:11:46.927 Test: blockdev write zeroes read split partial ...passed 00:11:46.927 Test: blockdev reset ...[2024-11-25 10:17:41.205347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:46.927 passed 00:11:46.927 Test: blockdev write read 8 blocks ...[2024-11-25 10:17:41.209717] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
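[annotation] The comparev step above exercises the NVMe COMPARE opcode deliberately: it writes one block, then compares it against a different buffer, so the COMPARE FAILURE (status 02/85) printed by the qpair trace is the expected, passing outcome. Outside SPDK, roughly the same round trip can be sketched with nvme-cli (flag spellings assumed from its documentation; pattern_a.bin and pattern_b.bin are 4096-byte files with different contents):

    nvme write   /dev/nvme0n1 -s 0 -c 0 -z 4096 -d pattern_a.bin
    nvme compare /dev/nvme0n1 -s 0 -c 0 -z 4096 -d pattern_b.bin
    # differing payloads => controller returns COMPARE FAILURE (02/85)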
00:11:46.927 passed 00:11:46.927 Test: blockdev write read size > 128k ...passed 00:11:46.927 Test: blockdev write read invalid size ...passed 00:11:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:46.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:46.927 Test: blockdev write read max offset ...passed 00:11:46.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:46.927 Test: blockdev writev readv 8 blocks ...passed 00:11:46.927 Test: blockdev writev readv 30 x 1block ...passed 00:11:46.927 Test: blockdev writev readv block ...passed 00:11:46.927 Test: blockdev writev readv size > 128k ...passed 00:11:46.927 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:46.927 Test: blockdev comparev and writev ...[2024-11-25 10:17:41.219290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c960a000 len:0x1000 00:11:46.927 passed 00:11:46.927 Test: blockdev nvme passthru rw ...[2024-11-25 10:17:41.219905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:46.927 passed 00:11:46.927 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:17:41.221041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:46.927 [2024-11-25 10:17:41.221183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:46.927 passed 00:11:46.927 Test: blockdev nvme admin passthru ...passed 00:11:46.927 Test: blockdev copy ...passed 00:11:46.927 Suite: bdevio tests on: Nvme2n3 00:11:46.927 Test: blockdev write read block ...passed 00:11:46.927 Test: blockdev write zeroes read block ...passed 00:11:46.927 Test: blockdev write zeroes read no split ...passed 00:11:46.927 Test: blockdev write zeroes read split ...passed 00:11:47.185 Test: blockdev write zeroes read split partial ...passed 00:11:47.185 Test: blockdev reset ...[2024-11-25 10:17:41.286215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:47.185 [2024-11-25 10:17:41.291229] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
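[annotation] Note that Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of the one controller at 0000:00:12.0 (see the bdev dump earlier), so the reset each suite performs disconnects and reconnects that same controller rather than a per-namespace device. A hedged shell equivalent, assuming the bdev_nvme_reset_controller RPC present in current SPDK:

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme2
    # all three Nvme2n* bdevs ride through the same disconnect/reconnect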
00:11:47.185 passed 00:11:47.185 Test: blockdev write read 8 blocks ...passed 00:11:47.185 Test: blockdev write read size > 128k ...passed 00:11:47.185 Test: blockdev write read invalid size ...passed 00:11:47.185 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.185 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.185 Test: blockdev write read max offset ...passed 00:11:47.185 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.185 Test: blockdev writev readv 8 blocks ...passed 00:11:47.185 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.185 Test: blockdev writev readv block ...passed 00:11:47.185 Test: blockdev writev readv size > 128k ...passed 00:11:47.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.185 Test: blockdev comparev and writev ...[2024-11-25 10:17:41.300941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac806000 len:0x1000 00:11:47.185 [2024-11-25 10:17:41.301063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:47.185 passed 00:11:47.185 Test: blockdev nvme passthru rw ...passed 00:11:47.185 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.185 Test: blockdev nvme admin passthru ...[2024-11-25 10:17:41.301964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:47.185 [2024-11-25 10:17:41.302014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:47.185 passed 00:11:47.185 Test: blockdev copy ...passed 00:11:47.185 Suite: bdevio tests on: Nvme2n2 00:11:47.185 Test: blockdev write read block ...passed 00:11:47.185 Test: blockdev write zeroes read block ...passed 00:11:47.185 Test: blockdev write zeroes read no split ...passed 00:11:47.185 Test: blockdev write zeroes read split ...passed 00:11:47.185 Test: blockdev write zeroes read split partial ...passed 00:11:47.185 Test: blockdev reset ...[2024-11-25 10:17:41.380420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:47.185 passed 00:11:47.185 Test: blockdev write read 8 blocks ...[2024-11-25 10:17:41.385320] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
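[annotation] The controllers being reset here are the ones attached during setup; their identity (cntlid, model, serial) lives in the ctrlr_data object that bdev_get_bdevs returned in the earlier dump. To pull just that block for one bdev, a jq filter over the same JSON shape works:

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme2n2 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data'
    # => cntlid, vendor_id 0x1b36, model_number "QEMU NVMe Ctrl", serial_number "12342", ...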
00:11:47.185 passed 00:11:47.185 Test: blockdev write read size > 128k ...passed 00:11:47.185 Test: blockdev write read invalid size ...passed 00:11:47.185 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.185 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.185 Test: blockdev write read max offset ...passed 00:11:47.185 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.185 Test: blockdev writev readv 8 blocks ...passed 00:11:47.185 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.185 Test: blockdev writev readv block ...passed 00:11:47.185 Test: blockdev writev readv size > 128k ...passed 00:11:47.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.185 Test: blockdev comparev and writev ...[2024-11-25 10:17:41.393081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4e3c000 len:0x1000 00:11:47.185 [2024-11-25 10:17:41.393209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:47.185 passed 00:11:47.185 Test: blockdev nvme passthru rw ...passed 00:11:47.185 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.185 Test: blockdev nvme admin passthru ...[2024-11-25 10:17:41.394204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:47.185 [2024-11-25 10:17:41.394263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:47.185 passed 00:11:47.185 Test: blockdev copy ...passed 00:11:47.185 Suite: bdevio tests on: Nvme2n1 00:11:47.185 Test: blockdev write read block ...passed 00:11:47.185 Test: blockdev write zeroes read block ...passed 00:11:47.185 Test: blockdev write zeroes read no split ...passed 00:11:47.185 Test: blockdev write zeroes read split ...passed 00:11:47.185 Test: blockdev write zeroes read split partial ...passed 00:11:47.185 Test: blockdev reset ...[2024-11-25 10:17:41.461995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:47.185 [2024-11-25 10:17:41.466922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:47.185 passed 00:11:47.185 Test: blockdev write read 8 blocks ...passed 00:11:47.185 Test: blockdev write read size > 128k ...passed 00:11:47.185 Test: blockdev write read invalid size ...passed 00:11:47.185 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.185 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.185 Test: blockdev write read max offset ...passed 00:11:47.185 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.185 Test: blockdev writev readv 8 blocks ...passed 00:11:47.185 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.185 Test: blockdev writev readv block ...passed 00:11:47.185 Test: blockdev writev readv size > 128k ...passed 00:11:47.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.185 Test: blockdev comparev and writev ...[2024-11-25 10:17:41.475978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4e38000 len:0x1000 00:11:47.185 [2024-11-25 10:17:41.476104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:47.185 passed 00:11:47.185 Test: blockdev nvme passthru rw ...passed 00:11:47.185 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.185 Test: blockdev nvme admin passthru ...[2024-11-25 10:17:41.477164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:47.185 [2024-11-25 10:17:41.477221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:47.185 passed 00:11:47.185 Test: blockdev copy ...passed 00:11:47.185 Suite: bdevio tests on: Nvme1n1 00:11:47.185 Test: blockdev write read block ...passed 00:11:47.185 Test: blockdev write zeroes read block ...passed 00:11:47.185 Test: blockdev write zeroes read no split ...passed 00:11:47.443 Test: blockdev write zeroes read split ...passed 00:11:47.443 Test: blockdev write zeroes read split partial ...passed 00:11:47.443 Test: blockdev reset ...[2024-11-25 10:17:41.545164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:47.443 [2024-11-25 10:17:41.549699] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
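[annotation] The "nvme passthru vendor specific" and "nvme admin passthru" steps forward raw commands through the bdev layer and expect the controller to reject them; the INVALID OPCODE (00/01) completions in the trace are therefore passes. A hedged nvme-cli analogue of the probe (opcode value illustrative):

    nvme admin-passthru /dev/nvme1 --opcode=0xff
    # an opcode the controller does not implement; expect Invalid Command Opcode (00/01)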
00:11:47.443 passed 00:11:47.443 Test: blockdev write read 8 blocks ...passed 00:11:47.443 Test: blockdev write read size > 128k ...passed 00:11:47.443 Test: blockdev write read invalid size ...passed 00:11:47.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.443 Test: blockdev write read max offset ...passed 00:11:47.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.443 Test: blockdev writev readv 8 blocks ...passed 00:11:47.443 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.443 Test: blockdev writev readv block ...passed 00:11:47.443 Test: blockdev writev readv size > 128k ...passed 00:11:47.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.443 Test: blockdev comparev and writev ...[2024-11-25 10:17:41.558719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4e34000 len:0x1000 00:11:47.443 [2024-11-25 10:17:41.558865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:47.443 passed 00:11:47.443 Test: blockdev nvme passthru rw ...passed 00:11:47.443 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.443 Test: blockdev nvme admin passthru ...[2024-11-25 10:17:41.559883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:47.443 [2024-11-25 10:17:41.559939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:47.443 passed 00:11:47.443 Test: blockdev copy ...passed 00:11:47.443 Suite: bdevio tests on: Nvme0n1 00:11:47.443 Test: blockdev write read block ...passed 00:11:47.443 Test: blockdev write zeroes read block ...passed 00:11:47.443 Test: blockdev write zeroes read no split ...passed 00:11:47.443 Test: blockdev write zeroes read split ...passed 00:11:47.443 Test: blockdev write zeroes read split partial ...passed 00:11:47.443 Test: blockdev reset ...[2024-11-25 10:17:41.625734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:47.443 [2024-11-25 10:17:41.630048] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:47.443 passed 00:11:47.443 Test: blockdev write read 8 blocks ...passed 00:11:47.443 Test: blockdev write read size > 128k ...passed 00:11:47.443 Test: blockdev write read invalid size ...passed 00:11:47.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.443 Test: blockdev write read max offset ...passed 00:11:47.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.443 Test: blockdev writev readv 8 blocks ...passed 00:11:47.443 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.443 Test: blockdev writev readv block ...passed 00:11:47.443 Test: blockdev writev readv size > 128k ...passed 00:11:47.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.443 Test: blockdev comparev and writev ...[2024-11-25 10:17:41.637465] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:47.443 separate metadata which is not supported yet. 
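[annotation] Nvme0n1 is the only bdev created with separate (non-interleaved) metadata, md_size 64 with md_interleave false in the earlier dump, which is why bdevio skips comparev_and_writev for it, as the ERROR line above states. The metadata layout of every bdev can be checked with a filter like:

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq '.[] | {name, md_size, md_interleave}'
    # Nvme0n1 reports md_size 64; the other bdevs report null here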
00:11:47.443 passed 00:11:47.443 Test: blockdev nvme passthru rw ...passed 00:11:47.443 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.443 Test: blockdev nvme admin passthru ...[2024-11-25 10:17:41.638283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:47.443 [2024-11-25 10:17:41.638414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:47.443 passed 00:11:47.443 Test: blockdev copy ...passed 00:11:47.443 00:11:47.443 Run Summary: Type Total Ran Passed Failed Inactive 00:11:47.443 suites 6 6 n/a 0 0 00:11:47.443 tests 138 138 138 0 0 00:11:47.443 asserts 893 893 893 0 n/a 00:11:47.443 00:11:47.443 Elapsed time = 1.372 seconds 00:11:47.443 0 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61219 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61219 ']' 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61219 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61219 00:11:47.443 killing process with pid 61219 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61219' 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61219 00:11:47.443 10:17:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61219 00:11:48.817 10:17:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:48.817 00:11:48.817 real 0m3.123s 00:11:48.817 user 0m7.798s 00:11:48.817 sys 0m0.537s 00:11:48.817 10:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.817 ************************************ 00:11:48.817 END TEST bdev_bounds 00:11:48.817 ************************************ 00:11:48.817 10:17:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:48.817 10:17:42 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:48.817 10:17:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:48.817 10:17:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.817 10:17:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:48.817 ************************************ 00:11:48.817 START TEST bdev_nbd 00:11:48.817 ************************************ 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:48.817 10:17:42 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61284 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61284 /var/tmp/spdk-nbd.sock 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61284 ']' 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.817 10:17:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:48.817 [2024-11-25 10:17:42.978498] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
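[annotation] The nbd test maps each bdev to a kernel /dev/nbdX node through the target's nbd_start_disk RPC, then proves the mapping works with a single direct-I/O read, the same dd call the waitfornbd helper runs below. Condensed to one device (the /tmp output path is illustrative), the cycle is:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # readability check
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0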
00:11:48.817 [2024-11-25 10:17:42.978822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.074 [2024-11-25 10:17:43.190127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.074 [2024-11-25 10:17:43.324053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:50.006 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:50.264 1+0 records in 
00:11:50.264 1+0 records out 00:11:50.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646 s, 6.3 MB/s 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:50.264 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:50.522 1+0 records in 00:11:50.522 1+0 records out 00:11:50.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737215 s, 5.6 MB/s 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:50.522 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.780 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:50.780 10:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:50.780 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:50.780 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:50.780 10:17:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.037 1+0 records in 00:11:51.037 1+0 records out 00:11:51.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778797 s, 5.3 MB/s 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.037 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.295 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.296 1+0 records in 00:11:51.296 1+0 records out 00:11:51.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780884 s, 5.2 MB/s 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.296 10:17:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.296 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.553 1+0 records in 00:11:51.553 1+0 records out 00:11:51.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070596 s, 5.8 MB/s 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:51.553 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.554 10:17:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.119 1+0 records in 00:11:52.119 1+0 records out 00:11:52.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688088 s, 6.0 MB/s 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd0", 00:11:52.119 "bdev_name": "Nvme0n1" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd1", 00:11:52.119 "bdev_name": "Nvme1n1" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd2", 00:11:52.119 "bdev_name": "Nvme2n1" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd3", 00:11:52.119 "bdev_name": "Nvme2n2" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd4", 00:11:52.119 "bdev_name": "Nvme2n3" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd5", 00:11:52.119 "bdev_name": "Nvme3n1" 00:11:52.119 } 00:11:52.119 ]' 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:52.119 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd0", 00:11:52.119 "bdev_name": "Nvme0n1" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd1", 00:11:52.119 "bdev_name": "Nvme1n1" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd2", 00:11:52.119 "bdev_name": "Nvme2n1" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd3", 00:11:52.119 "bdev_name": "Nvme2n2" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd4", 00:11:52.119 "bdev_name": "Nvme2n3" 00:11:52.119 }, 00:11:52.119 { 00:11:52.119 "nbd_device": "/dev/nbd5", 00:11:52.119 "bdev_name": "Nvme3n1" 00:11:52.119 } 00:11:52.119 ]' 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.377 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.635 10:17:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.924 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.209 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.467 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.726 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.727 10:17:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:53.985 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.244 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:54.502 10:17:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:54.502 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:54.760 /dev/nbd0 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:54.760 
10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:54.760 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:54.761 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:54.761 10:17:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.761 1+0 records in 00:11:54.761 1+0 records out 00:11:54.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672016 s, 6.1 MB/s 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:54.761 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:11:55.019 /dev/nbd1 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.019 1+0 records in 00:11:55.019 1+0 records out 00:11:55.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759129 s, 5.4 MB/s 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.019 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:11:55.277 /dev/nbd10 00:11:55.277 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.534 1+0 records in 00:11:55.534 1+0 records out 00:11:55.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713131 s, 5.7 MB/s 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.534 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:11:55.791 /dev/nbd11 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:55.791 10:17:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.791 1+0 records in 00:11:55.791 1+0 records out 00:11:55.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668968 s, 6.1 MB/s 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.791 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.792 10:17:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:11:56.049 /dev/nbd12 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.049 1+0 records in 00:11:56.049 1+0 records out 00:11:56.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104603 s, 3.9 MB/s 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:56.049 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:11:56.307 /dev/nbd13 
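The waitfornbd polling that follows for nbd13 repeats the pattern already traced for nbd0 through nbd12: up to 20 checks of /proc/partitions, then a single 4 KiB direct-I/O read to prove the device answers. Reconstructed from the xtrace alone, the helper looks roughly like the sketch below; the sleep back-off and the testfile path handling are assumptions, since the successful iterations in this log never expose them.

    # Sketch of the waitfornbd helper as implied by the
    # common/autotest_common.sh@872-893 trace lines above; not verbatim source.
    waitfornbd() {
        local nbd_name=$1 i size
        local testfile=/tmp/nbdtest        # this log uses test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            # Wait for the kernel to register the device.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                      # assumed back-off, not visible above
        done
        for ((i = 1; i <= 20; i++)); do
            # Read one 4 KiB block with O_DIRECT, bypassing the page cache.
            dd if=/dev/$nbd_name of="$testfile" bs=4096 count=1 iflag=direct && break
            sleep 0.1                      # assumed
        done
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [ "$size" != 0 ]                   # a non-empty read means the device is live
    }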
00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.307 1+0 records in 00:11:56.307 1+0 records out 00:11:56.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779061 s, 5.3 MB/s 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.307 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:56.565 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd0", 00:11:56.565 "bdev_name": "Nvme0n1" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd1", 00:11:56.565 "bdev_name": "Nvme1n1" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd10", 00:11:56.565 "bdev_name": "Nvme2n1" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd11", 00:11:56.565 "bdev_name": "Nvme2n2" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd12", 00:11:56.565 "bdev_name": "Nvme2n3" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd13", 00:11:56.565 "bdev_name": "Nvme3n1" 00:11:56.565 } 00:11:56.565 ]' 00:11:56.565 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd0", 00:11:56.565 "bdev_name": "Nvme0n1" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd1", 00:11:56.565 "bdev_name": "Nvme1n1" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd10", 00:11:56.565 "bdev_name": "Nvme2n1" 
00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd11", 00:11:56.565 "bdev_name": "Nvme2n2" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd12", 00:11:56.565 "bdev_name": "Nvme2n3" 00:11:56.565 }, 00:11:56.565 { 00:11:56.565 "nbd_device": "/dev/nbd13", 00:11:56.565 "bdev_name": "Nvme3n1" 00:11:56.565 } 00:11:56.565 ]' 00:11:56.565 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:56.823 /dev/nbd1 00:11:56.823 /dev/nbd10 00:11:56.823 /dev/nbd11 00:11:56.823 /dev/nbd12 00:11:56.823 /dev/nbd13' 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:56.823 /dev/nbd1 00:11:56.823 /dev/nbd10 00:11:56.823 /dev/nbd11 00:11:56.823 /dev/nbd12 00:11:56.823 /dev/nbd13' 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:56.823 256+0 records in 00:11:56.823 256+0 records out 00:11:56.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734271 s, 143 MB/s 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.823 10:17:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:56.823 256+0 records in 00:11:56.823 256+0 records out 00:11:56.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121806 s, 8.6 MB/s 00:11:56.823 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.823 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:57.082 256+0 records in 00:11:57.082 256+0 records out 00:11:57.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118911 s, 8.8 MB/s 00:11:57.082 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.082 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:57.082 256+0 records in 00:11:57.082 256+0 records out 00:11:57.082 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120723 s, 8.7 MB/s 00:11:57.082 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.082 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:57.340 256+0 records in 00:11:57.340 256+0 records out 00:11:57.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13151 s, 8.0 MB/s 00:11:57.340 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.340 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:57.340 256+0 records in 00:11:57.340 256+0 records out 00:11:57.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122837 s, 8.5 MB/s 00:11:57.340 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.340 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:57.597 256+0 records in 00:11:57.597 256+0 records out 00:11:57.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144292 s, 7.3 MB/s 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.597 10:17:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.855 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.113 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.680 10:17:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.680 10:17:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.938 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.196 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.454 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:59.711 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:59.712 10:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:59.969 malloc_lvol_verify 00:11:59.969 10:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:00.535 20732f4c-e2b7-4860-891c-30ed441580e6 00:12:00.535 10:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:00.535 f0fa4c7f-4c2c-4cbb-a6cd-44a3b501c2c3 00:12:00.535 10:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:01.101 /dev/nbd0 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:01.101 mke2fs 1.47.0 (5-Feb-2023) 00:12:01.101 Discarding device blocks: 0/4096 done 00:12:01.101 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:01.101 00:12:01.101 Allocating group tables: 0/1 done 00:12:01.101 Writing inode tables: 0/1 done 00:12:01.101 Creating journal (1024 blocks): done 00:12:01.101 Writing superblocks and filesystem accounting information: 0/1 done 00:12:01.101 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:01.101 10:17:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.101 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:01.359 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:01.359 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:01.359 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:01.359 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.359 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61284 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61284 ']' 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61284 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61284 00:12:01.360 killing process with pid 61284 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61284' 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61284 00:12:01.360 10:17:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61284 00:12:02.732 10:17:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:02.732 00:12:02.732 real 0m13.917s 00:12:02.732 user 0m20.349s 00:12:02.732 sys 0m4.284s 00:12:02.732 10:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.732 ************************************ 00:12:02.732 END TEST bdev_nbd 00:12:02.732 ************************************ 00:12:02.732 10:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 10:17:56 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:02.732 10:17:56 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:12:02.732 skipping fio tests on NVMe due to multi-ns failures. 00:12:02.732 10:17:56 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
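That closes the bdev_nbd stage. Everything in it was driven over a dedicated RPC socket rather than the default one: each nbd_start_disk, nbd_get_disks, and nbd_stop_disk call above goes through rpc.py -s /var/tmp/spdk-nbd.sock. Condensed into a standalone cycle (commands copied from the trace; replaying them outside the harness assumes an SPDK app is already listening on that socket):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # Export a bdev as a kernel block device and confirm the kernel sees it.
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0
    grep -w nbd0 /proc/partitions
    # List active mappings; jq extracts the device paths, as nbd_common.sh@64 does.
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'
    # Detach again; nbd_get_disks then returns [] and 'grep -c /dev/nbd' counts 0.
    $rpc nbd_stop_disk /dev/nbd0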
00:12:02.732 10:17:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:02.732 10:17:56 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:02.732 10:17:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:02.732 10:17:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.732 10:17:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 ************************************ 00:12:02.732 START TEST bdev_verify 00:12:02.732 ************************************ 00:12:02.732 10:17:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:02.732 [2024-11-25 10:17:56.938715] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:12:02.732 [2024-11-25 10:17:56.939003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:12:02.989 [2024-11-25 10:17:57.128450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:02.989 [2024-11-25 10:17:57.308713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.989 [2024-11-25 10:17:57.308718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.922 Running I/O for 5 seconds... 00:12:06.229 18112.00 IOPS, 70.75 MiB/s [2024-11-25T10:18:01.495Z] 18816.00 IOPS, 73.50 MiB/s [2024-11-25T10:18:02.431Z] 19456.00 IOPS, 76.00 MiB/s [2024-11-25T10:18:03.385Z] 19456.00 IOPS, 76.00 MiB/s [2024-11-25T10:18:03.385Z] 19276.80 IOPS, 75.30 MiB/s 00:12:09.052 Latency(us) 00:12:09.052 [2024-11-25T10:18:03.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.052 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x0 length 0xbd0bd 00:12:09.052 Nvme0n1 : 5.09 1585.67 6.19 0.00 0.00 80512.83 17396.83 84362.71 00:12:09.052 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:09.052 Nvme0n1 : 5.07 1589.36 6.21 0.00 0.00 80333.18 18469.24 77689.95 00:12:09.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x0 length 0xa0000 00:12:09.052 Nvme1n1 : 5.09 1585.01 6.19 0.00 0.00 80411.48 17515.99 80549.70 00:12:09.052 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0xa0000 length 0xa0000 00:12:09.052 Nvme1n1 : 5.08 1588.80 6.21 0.00 0.00 80177.42 18230.92 71493.82 00:12:09.052 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x0 length 0x80000 00:12:09.052 Nvme2n1 : 5.09 1583.67 6.19 0.00 0.00 80303.34 19899.11 76736.70 00:12:09.052 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x80000 length 0x80000 00:12:09.052 Nvme2n1 : 5.08 1588.21 6.20 0.00 0.00 80017.12 17277.67 70063.94 00:12:09.052 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x0 length 0x80000 00:12:09.052 Nvme2n2 : 5.09 1583.06 6.18 0.00 0.00 80167.61 20256.58 76736.70 00:12:09.052 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x80000 length 0x80000 00:12:09.052 Nvme2n2 : 5.08 1587.69 6.20 0.00 0.00 79865.86 17039.36 68634.07 00:12:09.052 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x0 length 0x80000 00:12:09.052 Nvme2n3 : 5.10 1582.44 6.18 0.00 0.00 80027.71 20494.89 81026.33 00:12:09.052 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x80000 length 0x80000 00:12:09.052 Nvme2n3 : 5.08 1587.09 6.20 0.00 0.00 79717.86 16681.89 71970.44 00:12:09.052 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x0 length 0x20000 00:12:09.052 Nvme3n1 : 5.10 1581.82 6.18 0.00 0.00 79884.50 14537.08 84839.33 00:12:09.052 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.052 Verification LBA range: start 0x20000 length 0x20000 00:12:09.052 Nvme3n1 : 5.08 1586.49 6.20 0.00 0.00 79580.46 10962.39 75306.82 00:12:09.052 [2024-11-25T10:18:03.385Z] =================================================================================================================== 00:12:09.052 [2024-11-25T10:18:03.385Z] Total : 19029.30 74.33 0.00 0.00 80083.28 10962.39 84839.33 00:12:10.951 00:12:10.951 real 0m8.279s 00:12:10.951 user 0m14.935s 00:12:10.951 sys 0m0.430s 00:12:10.951 10:18:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.951 10:18:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:10.951 ************************************ 00:12:10.951 END TEST bdev_verify 00:12:10.951 ************************************ 00:12:10.951 10:18:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:10.951 10:18:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:10.951 10:18:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.951 10:18:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.951 ************************************ 00:12:10.951 START TEST bdev_verify_big_io 00:12:10.951 ************************************ 00:12:10.951 10:18:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:10.951 [2024-11-25 10:18:05.219161] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
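The bdev_verify stage above and bdev_verify_big_io below wrap the same bdevperf binary; only the I/O size changes. The invocations, lifted from the run_test lines in this log (replaying them by hand assumes the same bdev.json describing the six NVMe namespaces):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # 128 outstanding I/Os, 4 KiB reads with payload verification, 5 s, cores 0-1.
    $bdevperf --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # bdev_verify_big_io: identical apart from 64 KiB I/Os.
    $bdevperf --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    # bdev_write_zeroes later swaps the workload and runs single-core for 1 s:
    $bdevperf --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1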
00:12:10.951 [2024-11-25 10:18:05.219344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61807 ] 00:12:11.209 [2024-11-25 10:18:05.402651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:11.466 [2024-11-25 10:18:05.610203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.466 [2024-11-25 10:18:05.610203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.399 Running I/O for 5 seconds... 00:12:16.369 1460.00 IOPS, 91.25 MiB/s [2024-11-25T10:18:12.603Z] 1857.50 IOPS, 116.09 MiB/s [2024-11-25T10:18:12.604Z] 2680.00 IOPS, 167.50 MiB/s [2024-11-25T10:18:12.604Z] 2494.50 IOPS, 155.91 MiB/s 00:12:18.271 Latency(us) 00:12:18.271 [2024-11-25T10:18:12.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.271 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x0 length 0xbd0b 00:12:18.271 Nvme0n1 : 5.72 132.68 8.29 0.00 0.00 937455.08 21328.99 907494.87 00:12:18.271 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:18.271 Nvme0n1 : 5.54 143.63 8.98 0.00 0.00 855962.81 31457.28 899868.86 00:12:18.271 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x0 length 0xa000 00:12:18.271 Nvme1n1 : 5.72 134.18 8.39 0.00 0.00 903331.53 77689.95 827421.79 00:12:18.271 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0xa000 length 0xa000 00:12:18.271 Nvme1n1 : 5.63 144.12 9.01 0.00 0.00 839183.64 89605.59 1082893.03 00:12:18.271 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x0 length 0x8000 00:12:18.271 Nvme2n1 : 5.73 134.11 8.38 0.00 0.00 877655.20 78643.20 815982.78 00:12:18.271 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x8000 length 0x8000 00:12:18.271 Nvme2n1 : 5.68 150.07 9.38 0.00 0.00 788889.30 45279.42 1090519.04 00:12:18.271 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x0 length 0x8000 00:12:18.271 Nvme2n2 : 5.76 136.71 8.54 0.00 0.00 840317.89 33602.09 808356.77 00:12:18.271 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x8000 length 0x8000 00:12:18.271 Nvme2n2 : 5.73 156.30 9.77 0.00 0.00 740318.22 43849.54 842673.80 00:12:18.271 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x0 length 0x8000 00:12:18.271 Nvme2n3 : 5.81 135.62 8.48 0.00 0.00 823873.79 22639.71 1662469.59 00:12:18.271 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x8000 length 0x8000 00:12:18.271 Nvme2n3 : 5.79 160.64 10.04 0.00 0.00 700373.69 47662.55 861738.82 00:12:18.271 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x0 length 0x2000 00:12:18.271 Nvme3n1 : 5.83 150.64 9.41 0.00 0.00 725515.30 3678.95 1700599.62 00:12:18.271 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:18.271 Verification LBA range: start 0x2000 length 0x2000 00:12:18.271 Nvme3n1 : 5.80 172.97 10.81 0.00 0.00 635813.77 4557.73 880803.84 00:12:18.271 [2024-11-25T10:18:12.604Z] =================================================================================================================== 00:12:18.271 [2024-11-25T10:18:12.604Z] Total : 1751.66 109.48 0.00 0.00 798459.42 3678.95 1700599.62 00:12:20.201 00:12:20.201 real 0m9.013s 00:12:20.201 user 0m16.566s 00:12:20.201 sys 0m0.421s 00:12:20.201 10:18:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.201 10:18:14 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.201 ************************************ 00:12:20.201 END TEST bdev_verify_big_io 00:12:20.201 ************************************ 00:12:20.201 10:18:14 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:20.201 10:18:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:20.201 10:18:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.201 10:18:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.201 ************************************ 00:12:20.201 START TEST bdev_write_zeroes 00:12:20.201 ************************************ 00:12:20.201 10:18:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:20.201 [2024-11-25 10:18:14.297172] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:12:20.201 [2024-11-25 10:18:14.297356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61921 ] 00:12:20.201 [2024-11-25 10:18:14.484075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.459 [2024-11-25 10:18:14.635409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.394 Running I/O for 1 seconds... 
00:12:22.329 45312.00 IOPS, 177.00 MiB/s 00:12:22.329 Latency(us) 00:12:22.329 [2024-11-25T10:18:16.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.329 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:22.329 Nvme0n1 : 1.03 7517.66 29.37 0.00 0.00 16975.36 13285.93 31695.59 00:12:22.329 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:22.329 Nvme1n1 : 1.03 7504.75 29.32 0.00 0.00 16973.15 13524.25 30742.34 00:12:22.329 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:22.329 Nvme2n1 : 1.03 7492.00 29.27 0.00 0.00 16950.43 13226.36 29312.47 00:12:22.329 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:22.329 Nvme2n2 : 1.04 7479.87 29.22 0.00 0.00 16882.67 10962.39 28478.37 00:12:22.329 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:22.329 Nvme2n3 : 1.04 7467.72 29.17 0.00 0.00 16872.63 10068.71 29550.78 00:12:22.329 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:22.329 Nvme3n1 : 1.04 7455.24 29.12 0.00 0.00 16838.01 8400.52 31933.91 00:12:22.329 [2024-11-25T10:18:16.662Z] =================================================================================================================== 00:12:22.329 [2024-11-25T10:18:16.662Z] Total : 44917.24 175.46 0.00 0.00 16915.37 8400.52 31933.91 00:12:23.264 00:12:23.264 real 0m3.397s 00:12:23.264 user 0m2.910s 00:12:23.264 sys 0m0.358s 00:12:23.264 10:18:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.264 ************************************ 00:12:23.264 END TEST bdev_write_zeroes 00:12:23.264 ************************************ 00:12:23.264 10:18:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:23.523 10:18:17 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:23.523 10:18:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:23.523 10:18:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.523 10:18:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.523 ************************************ 00:12:23.523 START TEST bdev_json_nonenclosed 00:12:23.523 ************************************ 00:12:23.523 10:18:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:23.523 [2024-11-25 10:18:17.739681] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:12:23.523 [2024-11-25 10:18:17.739885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61980 ] 00:12:23.781 [2024-11-25 10:18:17.927468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.781 [2024-11-25 10:18:18.095222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.781 [2024-11-25 10:18:18.095369] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:23.781 [2024-11-25 10:18:18.095401] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:23.781 [2024-11-25 10:18:18.095418] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:24.349 00:12:24.349 real 0m0.752s 00:12:24.349 user 0m0.500s 00:12:24.349 sys 0m0.146s 00:12:24.349 10:18:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.349 10:18:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:24.349 ************************************ 00:12:24.349 END TEST bdev_json_nonenclosed 00:12:24.349 ************************************ 00:12:24.349 10:18:18 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:24.349 10:18:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:24.349 10:18:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.349 10:18:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.349 ************************************ 00:12:24.349 START TEST bdev_json_nonarray 00:12:24.349 ************************************ 00:12:24.349 10:18:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:24.349 [2024-11-25 10:18:18.547763] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:12:24.349 [2024-11-25 10:18:18.547999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62011 ] 00:12:24.607 [2024-11-25 10:18:18.733931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.607 [2024-11-25 10:18:18.885974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.607 [2024-11-25 10:18:18.886149] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
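The two malformed-config tests here pin down exactly what json_config_prepare_ctx validates before anything else: the file must be a single JSON object, and its "subsystems" key must be an array. Schematically, the accepted shape (the same one gen_nvme.sh emits for load_subsystem_config later in this log) is:

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [ ... ] }
    ]
  }

nonenclosed.json violates the first rule (top-level content not enclosed in {}) and nonarray.json the second ("subsystems" bound to a non-array value); in both cases bdevperf is expected to fail fast, which is what the rpc_server_finish/spdk_app_stop warnings above and below record.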
00:12:24.607 [2024-11-25 10:18:18.886184] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:24.607 [2024-11-25 10:18:18.886200] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:24.866 00:12:24.866 real 0m0.746s 00:12:24.866 user 0m0.486s 00:12:24.866 sys 0m0.154s 00:12:24.866 10:18:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.866 10:18:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:24.866 ************************************ 00:12:24.866 END TEST bdev_json_nonarray 00:12:24.866 ************************************ 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:12:25.150 10:18:19 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:12:25.150 00:12:25.150 real 0m46.634s 00:12:25.150 user 1m10.134s 00:12:25.150 sys 0m7.839s 00:12:25.150 10:18:19 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.150 10:18:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.150 ************************************ 00:12:25.150 END TEST blockdev_nvme 00:12:25.150 ************************************ 00:12:25.150 10:18:19 -- spdk/autotest.sh@209 -- # uname -s 00:12:25.150 10:18:19 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:12:25.150 10:18:19 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:25.150 10:18:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.150 10:18:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.150 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:12:25.150 ************************************ 00:12:25.150 START TEST blockdev_nvme_gpt 00:12:25.150 ************************************ 00:12:25.150 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:25.150 * Looking for test storage... 
00:12:25.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:25.150 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.150 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.150 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.150 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.151 10:18:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:12:25.151 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.151 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.151 --rc genhtml_branch_coverage=1 00:12:25.151 --rc genhtml_function_coverage=1 00:12:25.151 --rc genhtml_legend=1 00:12:25.151 --rc geninfo_all_blocks=1 00:12:25.151 --rc geninfo_unexecuted_blocks=1 00:12:25.151 00:12:25.151 ' 00:12:25.151 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.151 --rc 
genhtml_branch_coverage=1 00:12:25.151 --rc genhtml_function_coverage=1 00:12:25.151 --rc genhtml_legend=1 00:12:25.151 --rc geninfo_all_blocks=1 00:12:25.151 --rc geninfo_unexecuted_blocks=1 00:12:25.151 00:12:25.151 ' 00:12:25.151 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.151 --rc genhtml_branch_coverage=1 00:12:25.151 --rc genhtml_function_coverage=1 00:12:25.151 --rc genhtml_legend=1 00:12:25.151 --rc geninfo_all_blocks=1 00:12:25.151 --rc geninfo_unexecuted_blocks=1 00:12:25.151 00:12:25.151 ' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.409 --rc genhtml_branch_coverage=1 00:12:25.409 --rc genhtml_function_coverage=1 00:12:25.409 --rc genhtml_legend=1 00:12:25.409 --rc geninfo_all_blocks=1 00:12:25.409 --rc geninfo_unexecuted_blocks=1 00:12:25.409 00:12:25.409 ' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62095 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:12:25.409 10:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62095 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62095 ']' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.409 10:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:25.409 [2024-11-25 10:18:19.623281] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:12:25.409 [2024-11-25 10:18:19.623459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62095 ] 00:12:25.669 [2024-11-25 10:18:19.804605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.669 [2024-11-25 10:18:19.954286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.045 10:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.045 10:18:20 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:12:27.045 10:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:27.045 10:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:12:27.045 10:18:20 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:27.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:27.302 Waiting for block devices as requested 00:12:27.302 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.302 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.561 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.561 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.824 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:12:32.824 10:18:26 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:12:32.824 BYT; 00:12:32.824 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:12:32.824 BYT; 00:12:32.824 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:32.824 10:18:26 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:32.824 10:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:12:33.759 The operation has completed successfully. 00:12:33.759 10:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:12:34.693 The operation has completed successfully. 00:12:34.693 10:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:35.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:35.827 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.827 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.827 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.827 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.086 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:12:36.086 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.086 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.086 [] 00:12:36.086 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.086 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:12:36.086 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:12:36.086 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:36.086 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:36.086 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:36.086 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.086 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.345 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.345 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:12:36.345 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:36.345 10:18:30 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.345 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.345 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.605 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.605 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:36.605 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:36.605 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:36.605 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.605 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:36.605 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:36.606 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dc7a60db-d05e-472b-a7e3-5a0ed7e16b0a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dc7a60db-d05e-472b-a7e3-5a0ed7e16b0a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9bc0f96e-294e-44e4-a587-7ffeec54acfc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bc0f96e-294e-44e4-a587-7ffeec54acfc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "292c7b94-f4b8-4644-b033-1a303bdaca9a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "292c7b94-f4b8-4644-b033-1a303bdaca9a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "bde7d8c4-8167-45c7-8a7b-3b76fcfa2013"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bde7d8c4-8167-45c7-8a7b-3b76fcfa2013",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9d7ecbc8-942b-4329-b9f4-20cc96e0cdee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9d7ecbc8-942b-4329-b9f4-20cc96e0cdee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:36.606 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:36.606 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:12:36.606 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:36.606 10:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62095 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62095 ']' 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62095 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62095 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.606 killing process with pid 62095 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62095' 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62095 00:12:36.606 10:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62095 00:12:39.192 10:18:33 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:39.192 10:18:33 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:39.192 10:18:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:39.192 10:18:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.192 10:18:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:39.192 ************************************ 00:12:39.192 START TEST bdev_hello_world 00:12:39.192 ************************************ 00:12:39.192 10:18:33 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:39.192 
[2024-11-25 10:18:33.430469] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:12:39.192 [2024-11-25 10:18:33.430717] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62726 ] 00:12:39.450 [2024-11-25 10:18:33.629272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.450 [2024-11-25 10:18:33.778378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.383 [2024-11-25 10:18:34.481760] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:40.383 [2024-11-25 10:18:34.481846] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:40.383 [2024-11-25 10:18:34.481884] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:40.383 [2024-11-25 10:18:34.485221] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:40.383 [2024-11-25 10:18:34.485786] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:40.383 [2024-11-25 10:18:34.485827] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:40.383 [2024-11-25 10:18:34.485983] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:40.383 00:12:40.383 [2024-11-25 10:18:34.486021] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:41.319 00:12:41.320 real 0m2.341s 00:12:41.320 user 0m1.888s 00:12:41.320 sys 0m0.340s 00:12:41.320 10:18:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.320 10:18:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:41.320 ************************************ 00:12:41.320 END TEST bdev_hello_world 00:12:41.320 ************************************ 00:12:41.577 10:18:35 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:41.577 10:18:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.577 10:18:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.577 10:18:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:41.577 ************************************ 00:12:41.577 START TEST bdev_bounds 00:12:41.578 ************************************ 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62776 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:41.578 Process bdevio pid: 62776 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62776' 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62776 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62776 ']' 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.578 10:18:35 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.578 10:18:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:41.578 [2024-11-25 10:18:35.809508] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:12:41.578 [2024-11-25 10:18:35.810440] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62776 ] 00:12:41.836 [2024-11-25 10:18:35.999645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:41.836 [2024-11-25 10:18:36.161736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.836 [2024-11-25 10:18:36.161882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.836 [2024-11-25 10:18:36.161893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.772 10:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.772 10:18:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:42.772 10:18:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:42.772 I/O targets: 00:12:42.772 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:42.772 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:12:42.772 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:12:42.772 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:42.772 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:42.772 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:42.772 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:42.772 00:12:42.772 00:12:42.772 CUnit - A unit testing framework for C - Version 2.1-3 00:12:42.772 http://cunit.sourceforge.net/ 00:12:42.772 00:12:42.772 00:12:42.772 Suite: bdevio tests on: Nvme3n1 00:12:42.772 Test: blockdev write read block ...passed 00:12:42.772 Test: blockdev write zeroes read block ...passed 00:12:42.772 Test: blockdev write zeroes read no split ...passed 00:12:42.772 Test: blockdev write zeroes read split ...passed 00:12:43.030 Test: blockdev write zeroes read split partial ...passed 00:12:43.030 Test: blockdev reset ...[2024-11-25 10:18:37.123004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:43.030 passed 00:12:43.030 Test: blockdev write read 8 blocks ...[2024-11-25 10:18:37.126885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:12:43.030 passed 00:12:43.030 Test: blockdev write read size > 128k ...passed 00:12:43.030 Test: blockdev write read invalid size ...passed 00:12:43.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.030 Test: blockdev write read max offset ...passed 00:12:43.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.030 Test: blockdev writev readv 8 blocks ...passed 00:12:43.030 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.030 Test: blockdev writev readv block ...passed 00:12:43.030 Test: blockdev writev readv size > 128k ...passed 00:12:43.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.030 Test: blockdev comparev and writev ...[2024-11-25 10:18:37.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7604000 len:0x1000 00:12:43.030 [2024-11-25 10:18:37.134446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:43.030 passed 00:12:43.030 Test: blockdev nvme passthru rw ...passed 00:12:43.030 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.030 Test: blockdev nvme admin passthru ...[2024-11-25 10:18:37.135257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:43.030 [2024-11-25 10:18:37.135300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:43.030 passed 00:12:43.030 Test: blockdev copy ...passed 00:12:43.030 Suite: bdevio tests on: Nvme2n3 00:12:43.030 Test: blockdev write read block ...passed 00:12:43.030 Test: blockdev write zeroes read block ...passed 00:12:43.030 Test: blockdev write zeroes read no split ...passed 00:12:43.030 Test: blockdev write zeroes read split ...passed 00:12:43.030 Test: blockdev write zeroes read split partial ...passed 00:12:43.030 Test: blockdev reset ...[2024-11-25 10:18:37.203549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:43.030 passed 00:12:43.030 Test: blockdev write read 8 blocks ...[2024-11-25 10:18:37.207942] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:43.030 passed 00:12:43.030 Test: blockdev write read size > 128k ...passed 00:12:43.030 Test: blockdev write read invalid size ...passed 00:12:43.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:43.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:43.030 Test: blockdev write read max offset ...passed 00:12:43.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:43.030 Test: blockdev writev readv 8 blocks ...passed 00:12:43.030 Test: blockdev writev readv 30 x 1block ...passed 00:12:43.030 Test: blockdev writev readv block ...passed 00:12:43.030 Test: blockdev writev readv size > 128k ...passed 00:12:43.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:43.030 Test: blockdev comparev and writev ...[2024-11-25 10:18:37.215302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7602000 len:0x1000 00:12:43.030 [2024-11-25 10:18:37.215357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:43.030 passed 00:12:43.030 Test: blockdev nvme passthru rw ...passed 00:12:43.030 Test: blockdev nvme passthru vendor specific ...passed 00:12:43.030 Test: blockdev nvme admin passthru ...[2024-11-25 10:18:37.216221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:43.030 [2024-11-25 10:18:37.216267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:43.030 passed 00:12:43.030 Test: blockdev copy ...passed 00:12:43.030 Suite: bdevio tests on: Nvme2n2 00:12:43.030 Test: blockdev write read block ...passed 00:12:43.030 Test: blockdev write zeroes read block ...passed 00:12:43.030 Test: blockdev write zeroes read no split ...passed 00:12:43.030 Test: blockdev write zeroes read split ...passed 00:12:43.030 Test: blockdev write zeroes read split partial ...passed 00:12:43.030 Test: blockdev reset ...[2024-11-25 10:18:37.285142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:43.030 passed 00:12:43.030 Test: blockdev write read 8 blocks ...[2024-11-25 10:18:37.289009] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:43.030 passed
00:12:43.030 Test: blockdev write read size > 128k ...passed
00:12:43.030 Test: blockdev write read invalid size ...passed
00:12:43.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:43.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:43.030 Test: blockdev write read max offset ...passed
00:12:43.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:43.030 Test: blockdev writev readv 8 blocks ...passed
00:12:43.030 Test: blockdev writev readv 30 x 1block ...passed
00:12:43.030 Test: blockdev writev readv block ...passed
00:12:43.030 Test: blockdev writev readv size > 128k ...passed
00:12:43.030 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:43.030 Test: blockdev comparev and writev ...passed
00:12:43.031 Test: blockdev nvme passthru rw ...[2024-11-25 10:18:37.296352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9c38000 len:0x1000
00:12:43.031 [2024-11-25 10:18:37.296401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:12:43.031 passed
00:12:43.031 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:18:37.297253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:12:43.031 [2024-11-25 10:18:37.297291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:12:43.031 passed
00:12:43.031 Test: blockdev nvme admin passthru ...passed
00:12:43.031 Test: blockdev copy ...passed
00:12:43.031 Suite: bdevio tests on: Nvme2n1
00:12:43.031 Test: blockdev write read block ...passed
00:12:43.031 Test: blockdev write zeroes read block ...passed
00:12:43.031 Test: blockdev write zeroes read no split ...passed
00:12:43.031 Test: blockdev write zeroes read split ...passed
00:12:43.293 Test: blockdev write zeroes read split partial ...passed
00:12:43.293 Test: blockdev reset ...[2024-11-25 10:18:37.367764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:12:43.293 passed
00:12:43.293 Test: blockdev write read 8 blocks ...[2024-11-25 10:18:37.372192] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:12:43.293 passed
00:12:43.293 Test: blockdev write read size > 128k ...passed
00:12:43.293 Test: blockdev write read invalid size ...passed
00:12:43.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:43.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:43.293 Test: blockdev write read max offset ...passed
00:12:43.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:43.293 Test: blockdev writev readv 8 blocks ...passed
00:12:43.293 Test: blockdev writev readv 30 x 1block ...passed
00:12:43.293 Test: blockdev writev readv block ...passed
00:12:43.293 Test: blockdev writev readv size > 128k ...passed
00:12:43.294 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:43.294 Test: blockdev comparev and writev ...[2024-11-25 10:18:37.380497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9c34000 len:0x1000
00:12:43.294 [2024-11-25 10:18:37.380561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:12:43.294 passed
00:12:43.294 Test: blockdev nvme passthru rw ...passed
00:12:43.294 Test: blockdev nvme passthru vendor specific ...passed
00:12:43.294 Test: blockdev nvme admin passthru ...[2024-11-25 10:18:37.381506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:12:43.294 [2024-11-25 10:18:37.381547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:12:43.294 passed
00:12:43.294 Test: blockdev copy ...passed
00:12:43.294 Suite: bdevio tests on: Nvme1n1p2
00:12:43.294 Test: blockdev write read block ...passed
00:12:43.294 Test: blockdev write zeroes read block ...passed
00:12:43.294 Test: blockdev write zeroes read no split ...passed
00:12:43.294 Test: blockdev write zeroes read split ...passed
00:12:43.294 Test: blockdev write zeroes read split partial ...passed
00:12:43.294 Test: blockdev reset ...[2024-11-25 10:18:37.448945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:12:43.294 passed
00:12:43.294 Test: blockdev write read 8 blocks ...[2024-11-25 10:18:37.452748] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:12:43.294 passed
00:12:43.294 Test: blockdev write read size > 128k ...passed
00:12:43.294 Test: blockdev write read invalid size ...passed
00:12:43.294 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:43.294 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:43.294 Test: blockdev write read max offset ...passed
00:12:43.294 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:43.294 Test: blockdev writev readv 8 blocks ...passed
00:12:43.294 Test: blockdev writev readv 30 x 1block ...passed
00:12:43.294 Test: blockdev writev readv block ...passed
00:12:43.294 Test: blockdev writev readv size > 128k ...passed
00:12:43.294 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:43.294 Test: blockdev comparev and writev ...[2024-11-25 10:18:37.461228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d9c30000 len:0x1000
00:12:43.294 [2024-11-25 10:18:37.461279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:12:43.294 passed
00:12:43.294 Test: blockdev nvme passthru rw ...passed
00:12:43.294 Test: blockdev nvme passthru vendor specific ...passed
00:12:43.294 Test: blockdev nvme admin passthru ...passed
00:12:43.294 Test: blockdev copy ...passed
00:12:43.294 Suite: bdevio tests on: Nvme1n1p1
00:12:43.294 Test: blockdev write read block ...passed
00:12:43.294 Test: blockdev write zeroes read block ...passed
00:12:43.294 Test: blockdev write zeroes read no split ...passed
00:12:43.294 Test: blockdev write zeroes read split ...passed
00:12:43.294 Test: blockdev write zeroes read split partial ...passed
00:12:43.294 Test: blockdev reset ...[2024-11-25 10:18:37.518153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:12:43.294 passed
00:12:43.294 Test: blockdev write read 8 blocks ...[2024-11-25 10:18:37.521868] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:12:43.294 passed
00:12:43.294 Test: blockdev write read size > 128k ...passed
00:12:43.294 Test: blockdev write read invalid size ...passed
00:12:43.294 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:43.294 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:43.294 Test: blockdev write read max offset ...passed
00:12:43.294 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:43.294 Test: blockdev writev readv 8 blocks ...passed
00:12:43.294 Test: blockdev writev readv 30 x 1block ...passed
00:12:43.294 Test: blockdev writev readv block ...passed
00:12:43.294 Test: blockdev writev readv size > 128k ...passed
00:12:43.294 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:43.294 Test: blockdev comparev and writev ...[2024-11-25 10:18:37.530464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c780e000 len:0x1000
00:12:43.294 [2024-11-25 10:18:37.530527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:12:43.294 passed
00:12:43.294 Test: blockdev nvme passthru rw ...passed
00:12:43.294 Test: blockdev nvme passthru vendor specific ...passed
00:12:43.294 Test: blockdev nvme admin passthru ...passed
00:12:43.294 Test: blockdev copy ...passed
00:12:43.294 Suite: bdevio tests on: Nvme0n1
00:12:43.294 Test: blockdev write read block ...passed
00:12:43.294 Test: blockdev write zeroes read block ...passed
00:12:43.294 Test: blockdev write zeroes read no split ...passed
00:12:43.294 Test: blockdev write zeroes read split ...passed
00:12:43.294 Test: blockdev write zeroes read split partial ...passed
00:12:43.294 Test: blockdev reset ...[2024-11-25 10:18:37.590587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:12:43.294 [2024-11-25 10:18:37.594382] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:12:43.294 passed
00:12:43.294 Test: blockdev write read 8 blocks ...passed
00:12:43.294 Test: blockdev write read size > 128k ...passed
00:12:43.294 Test: blockdev write read invalid size ...passed
00:12:43.294 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:43.294 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:43.294 Test: blockdev write read max offset ...passed
00:12:43.294 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:43.294 Test: blockdev writev readv 8 blocks ...passed
00:12:43.294 Test: blockdev writev readv 30 x 1block ...passed
00:12:43.294 Test: blockdev writev readv block ...passed
00:12:43.294 Test: blockdev writev readv size > 128k ...passed
00:12:43.294 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:43.294 Test: blockdev comparev and writev ...passed
00:12:43.294 Test: blockdev nvme passthru rw ...[2024-11-25 10:18:37.601655] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:12:43.294 separate metadata which is not supported yet.
00:12:43.294 passed
00:12:43.294 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:18:37.602304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:12:43.294 [2024-11-25 10:18:37.602353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:12:43.294 passed
00:12:43.294 Test: blockdev nvme admin passthru ...passed
00:12:43.294 Test: blockdev copy ...passed
00:12:43.294
00:12:43.294 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:12:43.294               suites      7      7    n/a      0        0
00:12:43.294                tests    161    161    161      0        0
00:12:43.294              asserts   1025   1025   1025      0      n/a
00:12:43.294
00:12:43.294 Elapsed time = 1.479 seconds
00:12:43.294 0
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62776
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62776 ']'
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62776
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62776
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:43.553 10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62776'
killing process with pid 62776
10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62776
10:18:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62776
00:12:44.486 10:18:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:12:44.486
00:12:44.486 real 0m3.029s
00:12:44.486 user 0m7.722s
00:12:44.486 sys 0m0.491s
00:12:44.486 10:18:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:44.486 10:18:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:12:44.486 ************************************
00:12:44.486 END TEST bdev_bounds
00:12:44.486 ************************************
00:12:44.487 10:18:38 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:12:44.487 10:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:44.487 10:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:44.487 10:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:44.487 ************************************
00:12:44.487 START TEST bdev_nbd
00:12:44.487 ************************************
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62845
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62845 /var/tmp/spdk-nbd.sock
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62845 ']'
00:12:44.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:44.487 10:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:12:44.762 [2024-11-25 10:18:38.896516] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
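
(For reference: the NBD round-trip that nbd_function_test drives below can be reproduced by hand with the same binaries and RPCs that appear in this trace. A minimal sketch, assuming the repo layout, bdev.json config, and socket path from this run; the unbounded wait loop and the /tmp/nbdtest scratch path are simplifications, not taken from the harness, which bounds its wait to 20 retries and uses test/bdev/nbdtest.)

  # Start a bare bdev application that owns the bdevs and serves RPC on a private socket.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

  # Export one bdev as a kernel NBD device; with no device argument, nbd_start_disk
  # picks a free /dev/nbdX and prints the chosen path (requires the nbd kernel module,
  # which the harness checks via [[ -e /sys/module/nbd ]]).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dev=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)

  # Wait until the kernel has registered the device, then read one block through it
  # with O_DIRECT, the same check the trace performs per device.
  until grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
  dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

  # Detach the device again.
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
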
00:12:44.762 [2024-11-25 10:18:38.897429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:45.019 [2024-11-25 10:18:39.085279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:45.020 [2024-11-25 10:18:39.237541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:45.954 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:46.213 1+0 records in
00:12:46.213 1+0 records out
00:12:46.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520664 s, 7.9 MB/s
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:46.213 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:46.471 1+0 records in
00:12:46.471 1+0 records out
00:12:46.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681515 s, 6.0 MB/s
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:46.471 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:46.730 1+0 records in
00:12:46.730 1+0 records out
00:12:46.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551962 s, 7.4 MB/s
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:46.730 10:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:46.988 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:46.989 1+0 records in
00:12:46.989 1+0 records out
00:12:46.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536576 s, 7.6 MB/s
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:46.989 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:47.248 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:47.248 1+0 records in
00:12:47.248 1+0 records out
00:12:47.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679021 s, 6.0 MB/s
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:47.506 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:47.765 1+0 records in
00:12:47.765 1+0 records out
00:12:47.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074745 s, 5.5 MB/s
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:47.765 10:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:48.024 1+0 records in
00:12:48.024 1+0 records out
00:12:48.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651362 s, 6.3 MB/s
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:48.024 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:48.282 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd0",
00:12:48.282 "bdev_name": "Nvme0n1"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd1",
00:12:48.282 "bdev_name": "Nvme1n1p1"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd2",
00:12:48.282 "bdev_name": "Nvme1n1p2"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd3",
00:12:48.282 "bdev_name": "Nvme2n1"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd4",
00:12:48.282 "bdev_name": "Nvme2n2"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd5",
00:12:48.282 "bdev_name": "Nvme2n3"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd6",
00:12:48.282 "bdev_name": "Nvme3n1"
00:12:48.282 }
00:12:48.282 ]'
00:12:48.282 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:12:48.282 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd0",
00:12:48.282 "bdev_name": "Nvme0n1"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd1",
00:12:48.282 "bdev_name": "Nvme1n1p1"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd2",
00:12:48.282 "bdev_name": "Nvme1n1p2"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd3",
00:12:48.282 "bdev_name": "Nvme2n1"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd4",
00:12:48.282 "bdev_name": "Nvme2n2"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd5",
00:12:48.282 "bdev_name": "Nvme2n3"
00:12:48.282 },
00:12:48.282 {
00:12:48.282 "nbd_device": "/dev/nbd6",
00:12:48.282 "bdev_name": "Nvme3n1"
00:12:48.282 }
00:12:48.282 ]'
00:12:48.282 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:48.283 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:48.542 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:48.542 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:48.542 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:48.542 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:48.542 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:48.542 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:48.801 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:48.801 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:48.801 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:48.801 10:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.060 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.318 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.578 10:18:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.838 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
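
(The nbd_get_disks dump above is a JSON array mapping each nbd_device node to its backing bdev_name; the harness extracts the device column with the jq filter shown in the trace. A minimal sketch of the same lookup run by hand against this run's socket; the select() variant is an illustration using standard jq, not taken from the trace.)

  # List just the active device nodes, exactly as nbd_common.sh does.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device'

  # Or answer "which bdev backs /dev/nbd0?" from the same output.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | select(.nbd_device == "/dev/nbd0") | .bdev_name'
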
00:12:50.096 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:50.355 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:50.922 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:50.922 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:50.922 10:18:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:50.922 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:12:51.181 /dev/nbd0
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:51.181 1+0 records in
00:12:51.181 1+0 records out
00:12:51.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056674 s, 7.2 MB/s
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:51.181 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:12:51.440 /dev/nbd1
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:51.440 1+0 records in
00:12:51.440 1+0 records out
00:12:51.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471198 s, 8.7 MB/s
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:51.440 10:18:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:12:51.699 /dev/nbd10
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:51.957 1+0 records in
00:12:51.957 1+0 records out
00:12:51.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647749 s, 6.3 MB/s
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:51.957 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:12:52.216 /dev/nbd11
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:52.216 1+0 records in
00:12:52.216 1+0 records out
00:12:52.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664366 s, 6.2 MB/s
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:52.216 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:12:52.474 /dev/nbd12
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:52.474 1+0 records in
00:12:52.474 1+0 records out
00:12:52.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085008 s, 4.8 MB/s
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:52.474 10:18:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
00:12:53.039 /dev/nbd13
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:53.040 1+0 records in
00:12:53.040 1+0 records out
00:12:53.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775765 s, 5.3 MB/s
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:53.040 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14
00:12:53.298 /dev/nbd14
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:53.298 1+0 records in
00:12:53.298 1+0 records out
00:12:53.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796128 s, 5.1 MB/s
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:53.298 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:53.557 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:53.557 {
00:12:53.557 "nbd_device": "/dev/nbd0",
00:12:53.557 "bdev_name": "Nvme0n1"
00:12:53.557 },
00:12:53.557 {
00:12:53.557 "nbd_device": "/dev/nbd1",
00:12:53.557 "bdev_name": "Nvme1n1p1"
00:12:53.557 },
00:12:53.557 {
00:12:53.557 "nbd_device": "/dev/nbd10",
00:12:53.557 "bdev_name": "Nvme1n1p2"
00:12:53.557 },
00:12:53.557 {
00:12:53.557 "nbd_device": "/dev/nbd11",
00:12:53.557 "bdev_name": "Nvme2n1"
00:12:53.557 },
00:12:53.557 {
00:12:53.557 "nbd_device": "/dev/nbd12",
00:12:53.557 "bdev_name": "Nvme2n2"
00:12:53.557 },
00:12:53.557 {
00:12:53.557 "nbd_device": "/dev/nbd13",
00:12:53.557 "bdev_name": "Nvme2n3"
00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd14", 00:12:53.557 "bdev_name": "Nvme3n1" 00:12:53.557 } 00:12:53.557 ]' 00:12:53.557 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd0", 00:12:53.557 "bdev_name": "Nvme0n1" 00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd1", 00:12:53.557 "bdev_name": "Nvme1n1p1" 00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd10", 00:12:53.557 "bdev_name": "Nvme1n1p2" 00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd11", 00:12:53.557 "bdev_name": "Nvme2n1" 00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd12", 00:12:53.557 "bdev_name": "Nvme2n2" 00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd13", 00:12:53.557 "bdev_name": "Nvme2n3" 00:12:53.557 }, 00:12:53.557 { 00:12:53.557 "nbd_device": "/dev/nbd14", 00:12:53.557 "bdev_name": "Nvme3n1" 00:12:53.557 } 00:12:53.557 ]' 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:53.558 /dev/nbd1 00:12:53.558 /dev/nbd10 00:12:53.558 /dev/nbd11 00:12:53.558 /dev/nbd12 00:12:53.558 /dev/nbd13 00:12:53.558 /dev/nbd14' 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:53.558 /dev/nbd1 00:12:53.558 /dev/nbd10 00:12:53.558 /dev/nbd11 00:12:53.558 /dev/nbd12 00:12:53.558 /dev/nbd13 00:12:53.558 /dev/nbd14' 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:53.558 256+0 records in 00:12:53.558 256+0 records out 00:12:53.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536358 s, 195 MB/s 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:53.558 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:53.817 256+0 records in 00:12:53.817 256+0 records out 00:12:53.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.166634 s, 6.3 MB/s 00:12:53.817 10:18:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:53.817 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:54.075 256+0 records in 00:12:54.075 256+0 records out 00:12:54.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179365 s, 5.8 MB/s 00:12:54.076 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.076 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:54.076 256+0 records in 00:12:54.076 256+0 records out 00:12:54.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185966 s, 5.6 MB/s 00:12:54.076 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.076 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:54.334 256+0 records in 00:12:54.334 256+0 records out 00:12:54.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190111 s, 5.5 MB/s 00:12:54.334 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.334 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:54.593 256+0 records in 00:12:54.593 256+0 records out 00:12:54.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172492 s, 6.1 MB/s 00:12:54.593 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.593 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:54.593 256+0 records in 00:12:54.593 256+0 records out 00:12:54.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160038 s, 6.6 MB/s 00:12:54.593 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.593 10:18:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:54.852 256+0 records in 00:12:54.852 256+0 records out 00:12:54.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156658 s, 6.7 MB/s 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.852 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.419 10:18:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.985 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.243 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.501 10:18:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.760 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.327 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:57.586 10:18:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:57.844 malloc_lvol_verify 00:12:57.844 10:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:58.102 4e1906c3-bb03-4955-93e1-46ab292abdf2 00:12:58.102 10:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:58.361 c660ae4f-e16a-4cf5-a9e9-e1388d372bd4 00:12:58.361 10:18:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:58.927 /dev/nbd0 00:12:58.927 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:58.927 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:58.927 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:58.927 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:58.927 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:58.927 mke2fs 1.47.0 (5-Feb-2023) 00:12:58.927 Discarding device blocks: 0/4096 done 00:12:58.927 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:58.927 00:12:58.927 Allocating group tables: 0/1 done 00:12:58.927 Writing inode tables: 0/1 done 00:12:58.927 Creating journal (1024 blocks): done 00:12:58.927 Writing superblocks and filesystem accounting information: 0/1 done 00:12:58.928 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:58.928 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62845 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62845 ']' 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62845 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62845 00:12:59.186 killing process with pid 62845 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62845' 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62845 00:12:59.186 10:18:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62845 00:13:00.562 10:18:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:00.562 00:13:00.562 real 0m15.786s 00:13:00.562 user 0m22.847s 00:13:00.562 sys 0m5.005s 00:13:00.562 10:18:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.562 10:18:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:00.562 ************************************ 00:13:00.562 END TEST bdev_nbd 00:13:00.562 ************************************ 00:13:00.562 10:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:00.562 10:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:13:00.562 10:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:13:00.562 skipping fio tests on NVMe due to multi-ns failures. 00:13:00.562 10:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:00.562 10:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:00.562 10:18:54 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:00.562 10:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:00.562 10:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.562 10:18:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:00.562 ************************************ 00:13:00.562 START TEST bdev_verify 00:13:00.562 ************************************ 00:13:00.562 10:18:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:00.562 [2024-11-25 10:18:54.729275] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:13:00.562 [2024-11-25 10:18:54.729470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:13:00.820 [2024-11-25 10:18:54.911382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:00.820 [2024-11-25 10:18:55.056730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.820 [2024-11-25 10:18:55.056741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.755 Running I/O for 5 seconds... 
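For reference, the bdev_verify invocation above can be replayed by hand. The flag annotations below are inferences: -q, -o, -w and -t are echoed back in the result table headers ("depth: 128, IO size: 4096"), while the effect of -C is read off the table itself, where every bdev gets one job per enabled core:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # -q 128     queue depth per job         -o 4096   I/O size in bytes
  # -w verify  verification workload       -t 5      run time in seconds
  # -m 0x3     reactors on cores 0 and 1   -C        one job per bdev per core (inferred)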
00:13:04.067 19968.00 IOPS, 78.00 MiB/s [2024-11-25T10:18:59.334Z] 20032.00 IOPS, 78.25 MiB/s [2024-11-25T10:19:00.268Z] 19477.33 IOPS, 76.08 MiB/s [2024-11-25T10:19:01.203Z] 19536.00 IOPS, 76.31 MiB/s [2024-11-25T10:19:01.203Z] 19558.40 IOPS, 76.40 MiB/s 00:13:06.870 Latency(us) 00:13:06.870 [2024-11-25T10:19:01.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.870 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x0 length 0xbd0bd 00:13:06.870 Nvme0n1 : 5.08 1372.99 5.36 0.00 0.00 92694.77 12392.26 87699.08 00:13:06.870 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:06.870 Nvme0n1 : 5.09 1382.29 5.40 0.00 0.00 92356.15 19184.17 88652.33 00:13:06.870 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x0 length 0x4ff80 00:13:06.870 Nvme1n1p1 : 5.08 1372.00 5.36 0.00 0.00 92593.37 14417.92 83409.45 00:13:06.870 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:06.870 Nvme1n1p1 : 5.09 1381.85 5.40 0.00 0.00 92146.92 19184.17 74830.20 00:13:06.870 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x0 length 0x4ff7f 00:13:06.870 Nvme1n1p2 : 5.10 1380.16 5.39 0.00 0.00 92192.51 12749.73 81502.95 00:13:06.870 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:06.870 Nvme1n1p2 : 5.10 1380.84 5.39 0.00 0.00 92008.01 19899.11 70063.94 00:13:06.870 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x0 length 0x80000 00:13:06.870 Nvme2n1 : 5.10 1379.12 5.39 0.00 0.00 92063.94 15132.86 77689.95 00:13:06.870 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x80000 length 0x80000 00:13:06.870 Nvme2n1 : 5.10 1380.39 5.39 0.00 0.00 91848.80 20375.74 68634.07 00:13:06.870 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x0 length 0x80000 00:13:06.870 Nvme2n2 : 5.11 1378.50 5.38 0.00 0.00 91913.50 15847.80 81026.33 00:13:06.870 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.870 Verification LBA range: start 0x80000 length 0x80000 00:13:06.871 Nvme2n2 : 5.10 1379.94 5.39 0.00 0.00 91702.66 20256.58 71493.82 00:13:06.871 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.871 Verification LBA range: start 0x0 length 0x80000 00:13:06.871 Nvme2n3 : 5.11 1378.09 5.38 0.00 0.00 91762.72 15847.80 83886.08 00:13:06.871 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.871 Verification LBA range: start 0x80000 length 0x80000 00:13:06.871 Nvme2n3 : 5.10 1379.42 5.39 0.00 0.00 91538.88 18230.92 72923.69 00:13:06.871 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:06.871 Verification LBA range: start 0x0 length 0x20000 00:13:06.871 Nvme3n1 : 5.11 1377.64 5.38 0.00 0.00 91596.38 15073.28 86745.83 00:13:06.871 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.871 Verification LBA range: start 0x20000 length 0x20000 00:13:06.871 
Nvme3n1 : 5.11 1378.98 5.39 0.00 0.00 91392.64 13047.62 74830.20 00:13:06.871 [2024-11-25T10:19:01.204Z] =================================================================================================================== 00:13:06.871 [2024-11-25T10:19:01.204Z] Total : 19302.22 75.40 0.00 0.00 91985.66 12392.26 88652.33 00:13:08.255 00:13:08.255 real 0m7.818s 00:13:08.255 user 0m14.211s 00:13:08.255 sys 0m0.403s 00:13:08.255 10:19:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.255 10:19:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:08.255 ************************************ 00:13:08.255 END TEST bdev_verify 00:13:08.255 ************************************ 00:13:08.255 10:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:08.255 10:19:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:08.255 10:19:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.255 10:19:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:08.255 ************************************ 00:13:08.255 START TEST bdev_verify_big_io 00:13:08.255 ************************************ 00:13:08.255 10:19:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:08.514 [2024-11-25 10:19:02.640091] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:13:08.514 [2024-11-25 10:19:02.640342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63410 ] 00:13:08.772 [2024-11-25 10:19:02.866454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.772 [2024-11-25 10:19:03.019305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.772 [2024-11-25 10:19:03.019319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.707 Running I/O for 5 seconds... 
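A note on the core mask, since every device appears twice in the table above: -m 0x3 is a bitmask, which is why the app reports "Total cores available: 2" and starts reactors on cores 0 and 1, and why each bdev shows both a "Core Mask 0x1" row and a "Core Mask 0x2" row. Expanding a mask into a core list is a few lines of bash:

  mask=0x3
  for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && echo "core $core"
  done
  # prints "core 0" and "core 1" for mask 0x3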
00:13:14.463 2536.00 IOPS, 158.50 MiB/s [2024-11-25T10:19:09.730Z] 2844.50 IOPS, 177.78 MiB/s [2024-11-25T10:19:09.988Z] 2467.00 IOPS, 154.19 MiB/s [2024-11-25T10:19:09.988Z] 2590.00 IOPS, 161.88 MiB/s 00:13:15.655 Latency(us) 00:13:15.655 [2024-11-25T10:19:09.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.655 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0xbd0b 00:13:15.655 Nvme0n1 : 5.80 126.24 7.89 0.00 0.00 976256.14 29908.25 1082893.03 00:13:15.655 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:15.655 Nvme0n1 : 5.82 120.20 7.51 0.00 0.00 1026188.99 17039.36 1311673.25 00:13:15.655 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0x4ff8 00:13:15.655 Nvme1n1p1 : 5.80 128.57 8.04 0.00 0.00 936006.14 85792.58 1082893.03 00:13:15.655 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x4ff8 length 0x4ff8 00:13:15.655 Nvme1n1p1 : 5.79 117.26 7.33 0.00 0.00 1023297.30 81502.95 1060015.01 00:13:15.655 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0x4ff7 00:13:15.655 Nvme1n1p2 : 5.81 132.27 8.27 0.00 0.00 900799.15 40751.48 899868.86 00:13:15.655 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x4ff7 length 0x4ff7 00:13:15.655 Nvme1n1p2 : 5.83 118.03 7.38 0.00 0.00 1003247.91 33363.78 1334551.27 00:13:15.655 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0x8000 00:13:15.655 Nvme2n1 : 5.81 132.19 8.26 0.00 0.00 881992.30 40989.79 903681.86 00:13:15.655 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x8000 length 0x8000 00:13:15.655 Nvme2n1 : 5.87 130.79 8.17 0.00 0.00 879077.31 16801.05 892242.85 00:13:15.655 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0x8000 00:13:15.655 Nvme2n2 : 5.82 137.25 8.58 0.00 0.00 834779.81 7923.90 1090519.04 00:13:15.655 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x8000 length 0x8000 00:13:15.655 Nvme2n2 : 5.87 123.78 7.74 0.00 0.00 900993.55 16920.20 1731103.65 00:13:15.655 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0x8000 00:13:15.655 Nvme2n3 : 5.83 137.34 8.58 0.00 0.00 814861.14 7268.54 922746.88 00:13:15.655 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x8000 length 0x8000 00:13:15.655 Nvme2n3 : 5.90 127.83 7.99 0.00 0.00 847298.15 22401.40 1517575.45 00:13:15.655 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x0 length 0x2000 00:13:15.655 Nvme3n1 : 5.84 142.41 8.90 0.00 0.00 768369.04 5838.66 1105771.05 00:13:15.655 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.655 Verification LBA range: start 0x2000 length 0x2000 00:13:15.655 Nvme3n1 : 5.94 151.92 9.49 0.00 0.00 
705435.50 897.40 1555705.48 00:13:15.655 [2024-11-25T10:19:09.988Z] =================================================================================================================== 00:13:15.655 [2024-11-25T10:19:09.988Z] Total : 1826.07 114.13 0.00 0.00 886152.84 897.40 1731103.65 00:13:17.554 00:13:17.554 real 0m9.321s 00:13:17.554 user 0m17.033s 00:13:17.554 sys 0m0.519s 00:13:17.554 10:19:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.554 10:19:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.554 ************************************ 00:13:17.554 END TEST bdev_verify_big_io 00:13:17.554 ************************************ 00:13:17.554 10:19:11 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:17.554 10:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:17.554 10:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.554 10:19:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:17.554 ************************************ 00:13:17.554 START TEST bdev_write_zeroes 00:13:17.554 ************************************ 00:13:17.554 10:19:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:17.812 [2024-11-25 10:19:11.998990] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:13:17.812 [2024-11-25 10:19:11.999201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63525 ] 00:13:18.070 [2024-11-25 10:19:12.192545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.070 [2024-11-25 10:19:12.351598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.002 Running I/O for 1 seconds... 
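Sanity check on the verify and big-I/O tables above: the MiB/s column is IOPS multiplied by the I/O size, which confirms the first run used 4 KiB I/Os and the second 64 KiB:

  awk 'BEGIN {
    printf "verify: %.2f MiB/s\n", 19302.22 * 4096  / 1048576   # table shows 75.40
    printf "big_io: %.2f MiB/s\n", 1826.07  * 65536 / 1048576   # table shows 114.13
  }'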
00:13:19.934 49216.00 IOPS, 192.25 MiB/s 00:13:19.934 Latency(us) 00:13:19.934 [2024-11-25T10:19:14.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.934 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme0n1 : 1.03 7049.94 27.54 0.00 0.00 18099.42 8757.99 29550.78 00:13:19.934 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme1n1p1 : 1.03 7040.41 27.50 0.00 0.00 18088.56 14417.92 30980.65 00:13:19.934 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme1n1p2 : 1.03 7031.42 27.47 0.00 0.00 18046.90 13941.29 28240.06 00:13:19.934 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme2n1 : 1.03 7056.83 27.57 0.00 0.00 17899.92 9949.56 25976.09 00:13:19.934 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme2n2 : 1.03 7020.50 27.42 0.00 0.00 17936.39 11260.28 25618.62 00:13:19.934 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme2n3 : 1.03 7011.78 27.39 0.00 0.00 17906.05 10247.45 25856.93 00:13:19.934 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:19.934 Nvme3n1 : 1.03 6941.59 27.12 0.00 0.00 18039.29 11141.12 28597.53 00:13:19.934 [2024-11-25T10:19:14.267Z] =================================================================================================================== 00:13:19.934 [2024-11-25T10:19:14.267Z] Total : 49152.48 192.00 0.00 0.00 18002.18 8757.99 30980.65 00:13:21.308 00:13:21.308 real 0m3.468s 00:13:21.308 user 0m2.958s 00:13:21.308 sys 0m0.386s 00:13:21.308 10:19:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.308 10:19:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:21.308 ************************************ 00:13:21.308 END TEST bdev_write_zeroes 00:13:21.308 ************************************ 00:13:21.308 10:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:21.308 10:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:21.308 10:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.308 10:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:21.308 ************************************ 00:13:21.308 START TEST bdev_json_nonenclosed 00:13:21.308 ************************************ 00:13:21.308 10:19:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:21.308 [2024-11-25 10:19:15.526672] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
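The START TEST / END TEST banners and the real/user/sys trios that bracket each test come from the run_test wrapper used throughout this log. Its actual body is not shown here, so the sketch below is only a plausible reconstruction built around bash's time keyword:

  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"        # emits the real/user/sys lines seen after each test
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }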
00:13:21.308 [2024-11-25 10:19:15.526911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63583 ] 00:13:21.566 [2024-11-25 10:19:15.721576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.566 [2024-11-25 10:19:15.867155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.566 [2024-11-25 10:19:15.867345] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:21.566 [2024-11-25 10:19:15.867375] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:21.566 [2024-11-25 10:19:15.867390] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:22.134 00:13:22.134 real 0m0.757s 00:13:22.134 user 0m0.462s 00:13:22.134 sys 0m0.188s 00:13:22.134 10:19:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.134 10:19:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:22.134 ************************************ 00:13:22.134 END TEST bdev_json_nonenclosed 00:13:22.134 ************************************ 00:13:22.134 10:19:16 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.134 10:19:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:22.134 10:19:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.134 10:19:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:22.134 ************************************ 00:13:22.134 START TEST bdev_json_nonarray 00:13:22.134 ************************************ 00:13:22.134 10:19:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.134 [2024-11-25 10:19:16.329628] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:13:22.134 [2024-11-25 10:19:16.329839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63609 ] 00:13:22.393 [2024-11-25 10:19:16.519160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.393 [2024-11-25 10:19:16.666589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.393 [2024-11-25 10:19:16.666765] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
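Both *ERROR* lines above are the expected outcomes of deliberately malformed configs. The contents of nonenclosed.json and nonarray.json are not reproduced in this log, so the snippets below are only illustrative guesses at the two failure shapes: a top-level value not enclosed in {} for the first, and a "subsystems" key that is not an array for the second:

  # hypothetical contents, not taken from this log
  printf '%s\n' '[ { "subsystems": [] } ]' > nonenclosed.json   # top level is an array, not an object
  printf '%s\n' '{ "subsystems": {} }'     > nonarray.json      # "subsystems" is an object, not an array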
00:13:22.393 [2024-11-25 10:19:16.666839] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:22.393 [2024-11-25 10:19:16.666854] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:22.652 00:13:22.652 real 0m0.740s 00:13:22.652 user 0m0.478s 00:13:22.652 sys 0m0.156s 00:13:22.652 10:19:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.652 10:19:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:22.652 ************************************ 00:13:22.652 END TEST bdev_json_nonarray 00:13:22.652 ************************************ 00:13:22.912 10:19:17 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:13:22.912 10:19:17 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:13:22.912 10:19:17 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:22.912 10:19:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.912 10:19:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.912 10:19:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:22.912 ************************************ 00:13:22.912 START TEST bdev_gpt_uuid 00:13:22.912 ************************************ 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63640 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63640 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63640 ']' 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.912 10:19:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:22.912 [2024-11-25 10:19:17.141343] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
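The bdev_gpt_uuid test that starts here loads bdev.json into spdk_tgt and then, as traced below, resolves each GPT partition by its unique_partition_guid with bdev_get_bdevs -b and extracts fields with jq. The same lookup can be replayed by hand against the running target (socket path as announced above):

  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
  # per the dump below, this prints: 6f89f330-603b-4116-ac73-2ca8eae53030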
00:13:22.912 [2024-11-25 10:19:17.141618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63640 ] 00:13:23.171 [2024-11-25 10:19:17.317303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.171 [2024-11-25 10:19:17.464862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.106 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.106 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:13:24.106 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.106 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.106 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:24.670 Some configs were skipped because the RPC state that can call them passed over. 00:13:24.670 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.670 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:13:24.670 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.670 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:24.670 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.670 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:13:24.671 { 00:13:24.671 "name": "Nvme1n1p1", 00:13:24.671 "aliases": [ 00:13:24.671 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:13:24.671 ], 00:13:24.671 "product_name": "GPT Disk", 00:13:24.671 "block_size": 4096, 00:13:24.671 "num_blocks": 655104, 00:13:24.671 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:24.671 "assigned_rate_limits": { 00:13:24.671 "rw_ios_per_sec": 0, 00:13:24.671 "rw_mbytes_per_sec": 0, 00:13:24.671 "r_mbytes_per_sec": 0, 00:13:24.671 "w_mbytes_per_sec": 0 00:13:24.671 }, 00:13:24.671 "claimed": false, 00:13:24.671 "zoned": false, 00:13:24.671 "supported_io_types": { 00:13:24.671 "read": true, 00:13:24.671 "write": true, 00:13:24.671 "unmap": true, 00:13:24.671 "flush": true, 00:13:24.671 "reset": true, 00:13:24.671 "nvme_admin": false, 00:13:24.671 "nvme_io": false, 00:13:24.671 "nvme_io_md": false, 00:13:24.671 "write_zeroes": true, 00:13:24.671 "zcopy": false, 00:13:24.671 "get_zone_info": false, 00:13:24.671 "zone_management": false, 00:13:24.671 "zone_append": false, 00:13:24.671 "compare": true, 00:13:24.671 "compare_and_write": false, 00:13:24.671 "abort": true, 00:13:24.671 "seek_hole": false, 00:13:24.671 "seek_data": false, 00:13:24.671 "copy": true, 00:13:24.671 "nvme_iov_md": false 00:13:24.671 }, 00:13:24.671 "driver_specific": { 
00:13:24.671 "gpt": { 00:13:24.671 "base_bdev": "Nvme1n1", 00:13:24.671 "offset_blocks": 256, 00:13:24.671 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:13:24.671 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:24.671 "partition_name": "SPDK_TEST_first" 00:13:24.671 } 00:13:24.671 } 00:13:24.671 } 00:13:24.671 ]' 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:13:24.671 { 00:13:24.671 "name": "Nvme1n1p2", 00:13:24.671 "aliases": [ 00:13:24.671 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:13:24.671 ], 00:13:24.671 "product_name": "GPT Disk", 00:13:24.671 "block_size": 4096, 00:13:24.671 "num_blocks": 655103, 00:13:24.671 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:24.671 "assigned_rate_limits": { 00:13:24.671 "rw_ios_per_sec": 0, 00:13:24.671 "rw_mbytes_per_sec": 0, 00:13:24.671 "r_mbytes_per_sec": 0, 00:13:24.671 "w_mbytes_per_sec": 0 00:13:24.671 }, 00:13:24.671 "claimed": false, 00:13:24.671 "zoned": false, 00:13:24.671 "supported_io_types": { 00:13:24.671 "read": true, 00:13:24.671 "write": true, 00:13:24.671 "unmap": true, 00:13:24.671 "flush": true, 00:13:24.671 "reset": true, 00:13:24.671 "nvme_admin": false, 00:13:24.671 "nvme_io": false, 00:13:24.671 "nvme_io_md": false, 00:13:24.671 "write_zeroes": true, 00:13:24.671 "zcopy": false, 00:13:24.671 "get_zone_info": false, 00:13:24.671 "zone_management": false, 00:13:24.671 "zone_append": false, 00:13:24.671 "compare": true, 00:13:24.671 "compare_and_write": false, 00:13:24.671 "abort": true, 00:13:24.671 "seek_hole": false, 00:13:24.671 "seek_data": false, 00:13:24.671 "copy": true, 00:13:24.671 "nvme_iov_md": false 00:13:24.671 }, 00:13:24.671 "driver_specific": { 00:13:24.671 "gpt": { 00:13:24.671 "base_bdev": "Nvme1n1", 00:13:24.671 "offset_blocks": 655360, 00:13:24.671 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:13:24.671 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:24.671 "partition_name": "SPDK_TEST_second" 00:13:24.671 } 00:13:24.671 } 00:13:24.671 } 00:13:24.671 ]' 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:13:24.671 10:19:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63640 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63640 ']' 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63640 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63640 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.928 killing process with pid 63640 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63640' 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63640 00:13:24.928 10:19:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63640 00:13:27.456 00:13:27.456 real 0m4.404s 00:13:27.456 user 0m4.527s 00:13:27.456 sys 0m0.686s 00:13:27.456 10:19:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.456 10:19:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:27.456 ************************************ 00:13:27.456 END TEST bdev_gpt_uuid 00:13:27.456 ************************************ 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:27.456 10:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:27.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:27.973 Waiting for block devices as requested 00:13:27.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:27.973 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:13:27.973 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.249 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.530 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:33.530 10:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:33.530 10:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:33.530 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:33.530 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:33.530 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:33.530 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:33.530 10:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:33.530 00:13:33.530 real 1m8.429s 00:13:33.530 user 1m27.295s 00:13:33.530 sys 0m11.579s 00:13:33.530 10:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.530 ************************************ 00:13:33.530 END TEST blockdev_nvme_gpt 00:13:33.530 ************************************ 00:13:33.530 10:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:33.530 10:19:27 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:33.530 10:19:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.530 10:19:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.530 10:19:27 -- common/autotest_common.sh@10 -- # set +x 00:13:33.530 ************************************ 00:13:33.530 START TEST nvme 00:13:33.530 ************************************ 00:13:33.530 10:19:27 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:33.530 * Looking for test storage... 00:13:33.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.789 10:19:27 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.789 10:19:27 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.789 10:19:27 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.789 10:19:27 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.789 10:19:27 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.789 10:19:27 nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:33.789 10:19:27 nvme -- scripts/common.sh@345 -- # : 1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.789 10:19:27 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.789 10:19:27 nvme -- scripts/common.sh@365 -- # decimal 1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@353 -- # local d=1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.789 10:19:27 nvme -- scripts/common.sh@355 -- # echo 1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.789 10:19:27 nvme -- scripts/common.sh@366 -- # decimal 2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@353 -- # local d=2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.789 10:19:27 nvme -- scripts/common.sh@355 -- # echo 2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.789 10:19:27 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.789 10:19:27 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.789 10:19:27 nvme -- scripts/common.sh@368 -- # return 0 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.789 --rc genhtml_branch_coverage=1 00:13:33.789 --rc genhtml_function_coverage=1 00:13:33.789 --rc genhtml_legend=1 00:13:33.789 --rc geninfo_all_blocks=1 00:13:33.789 --rc geninfo_unexecuted_blocks=1 00:13:33.789 00:13:33.789 ' 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.789 --rc genhtml_branch_coverage=1 00:13:33.789 --rc genhtml_function_coverage=1 00:13:33.789 --rc genhtml_legend=1 00:13:33.789 --rc geninfo_all_blocks=1 00:13:33.789 --rc geninfo_unexecuted_blocks=1 00:13:33.789 00:13:33.789 ' 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.789 --rc genhtml_branch_coverage=1 00:13:33.789 --rc genhtml_function_coverage=1 00:13:33.789 --rc genhtml_legend=1 00:13:33.789 --rc geninfo_all_blocks=1 00:13:33.789 --rc geninfo_unexecuted_blocks=1 00:13:33.789 00:13:33.789 ' 00:13:33.789 10:19:27 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.789 --rc genhtml_branch_coverage=1 00:13:33.789 --rc genhtml_function_coverage=1 00:13:33.789 --rc genhtml_legend=1 00:13:33.789 --rc geninfo_all_blocks=1 00:13:33.789 --rc geninfo_unexecuted_blocks=1 00:13:33.789 00:13:33.789 ' 00:13:33.789 10:19:27 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:34.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.920 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.920 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.920 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.920 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.219 10:19:29 nvme -- nvme/nvme.sh@79 -- # uname 00:13:35.219 10:19:29 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:35.219 10:19:29 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:35.219 10:19:29 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:35.219 10:19:29 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1075 -- # stubpid=64294 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:35.219 Waiting for stub to ready for secondary processes... 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64294 ]] 00:13:35.219 10:19:29 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:35.219 [2024-11-25 10:19:29.375488] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:13:35.219 [2024-11-25 10:19:29.375689] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:36.154 10:19:30 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:36.154 10:19:30 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64294 ]] 00:13:36.154 10:19:30 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:37.088 [2024-11-25 10:19:31.209011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.088 10:19:31 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:37.088 10:19:31 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64294 ]] 00:13:37.088 10:19:31 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:37.088 [2024-11-25 10:19:31.377388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.088 [2024-11-25 10:19:31.377452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.088 [2024-11-25 10:19:31.377447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.088 [2024-11-25 10:19:31.402044] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:37.088 [2024-11-25 10:19:31.402117] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:37.088 [2024-11-25 10:19:31.415914] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:37.088 [2024-11-25 10:19:31.416060] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:37.088 [2024-11-25 10:19:31.419305] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:37.088 [2024-11-25 10:19:31.420361] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:37.347 [2024-11-25 10:19:31.420501] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:37.347 [2024-11-25 10:19:31.424013] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:37.347 [2024-11-25 10:19:31.424301] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:37.347 [2024-11-25 10:19:31.424440] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:37.347 [2024-11-25 10:19:31.428191] 
nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:37.347 [2024-11-25 10:19:31.428477] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:37.347 [2024-11-25 10:19:31.428603] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:37.347 [2024-11-25 10:19:31.428704] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:37.347 [2024-11-25 10:19:31.428834] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:38.280 10:19:32 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:38.280 done. 00:13:38.280 10:19:32 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:13:38.280 10:19:32 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:38.280 10:19:32 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:13:38.280 10:19:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.280 10:19:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.280 ************************************ 00:13:38.280 START TEST nvme_reset 00:13:38.280 ************************************ 00:13:38.280 10:19:32 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:38.539 Initializing NVMe Controllers 00:13:38.539 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:38.539 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:38.539 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:38.539 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:38.539 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:38.539 00:13:38.539 real 0m0.409s 00:13:38.539 user 0m0.144s 00:13:38.539 sys 0m0.208s 00:13:38.539 10:19:32 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.539 ************************************ 00:13:38.539 END TEST nvme_reset 00:13:38.539 ************************************ 00:13:38.539 10:19:32 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:38.539 10:19:32 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:38.539 10:19:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.539 10:19:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.539 10:19:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.539 ************************************ 00:13:38.539 START TEST nvme_identify 00:13:38.539 ************************************ 00:13:38.539 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:13:38.539 10:19:32 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:38.539 10:19:32 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:38.539 10:19:32 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:38.539 10:19:32 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:38.539 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:38.539 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:13:38.539 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:38.539 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:38.539 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:38.797 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:38.797 10:19:32 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:38.797 10:19:32 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:39.058 [2024-11-25 10:19:33.233673] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64339 terminated unexpected 00:13:39.058 ===================================================== 00:13:39.058 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:39.058 ===================================================== 00:13:39.058 Controller Capabilities/Features 00:13:39.058 ================================ 00:13:39.058 Vendor ID: 1b36 00:13:39.058 Subsystem Vendor ID: 1af4 00:13:39.058 Serial Number: 12340 00:13:39.058 Model Number: QEMU NVMe Ctrl 00:13:39.058 Firmware Version: 8.0.0 00:13:39.058 Recommended Arb Burst: 6 00:13:39.058 IEEE OUI Identifier: 00 54 52 00:13:39.058 Multi-path I/O 00:13:39.058 May have multiple subsystem ports: No 00:13:39.058 May have multiple controllers: No 00:13:39.058 Associated with SR-IOV VF: No 00:13:39.058 Max Data Transfer Size: 524288 00:13:39.058 Max Number of Namespaces: 256 00:13:39.058 Max Number of I/O Queues: 64 00:13:39.058 NVMe Specification Version (VS): 1.4 00:13:39.058 NVMe Specification Version (Identify): 1.4 00:13:39.058 Maximum Queue Entries: 2048 00:13:39.058 Contiguous Queues Required: Yes 00:13:39.058 Arbitration Mechanisms Supported 00:13:39.058 Weighted Round Robin: Not Supported 00:13:39.058 Vendor Specific: Not Supported 00:13:39.058 Reset Timeout: 7500 ms 00:13:39.058 Doorbell Stride: 4 bytes 00:13:39.058 NVM Subsystem Reset: Not Supported 00:13:39.058 Command Sets Supported 00:13:39.058 NVM Command Set: Supported 00:13:39.058 Boot Partition: Not Supported 00:13:39.058 Memory Page Size Minimum: 4096 bytes 00:13:39.058 Memory Page Size Maximum: 65536 bytes 00:13:39.058 Persistent Memory Region: Not Supported 00:13:39.058 Optional Asynchronous Events Supported 00:13:39.058 Namespace Attribute Notices: Supported 00:13:39.058 Firmware Activation Notices: Not Supported 00:13:39.058 ANA Change Notices: Not Supported 00:13:39.058 PLE Aggregate Log Change Notices: Not Supported 00:13:39.058 LBA Status Info Alert Notices: Not Supported 00:13:39.058 EGE Aggregate Log Change Notices: Not Supported 00:13:39.058 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.058 Zone Descriptor Change Notices: Not Supported 00:13:39.058 Discovery Log Change Notices: Not Supported 00:13:39.058 Controller Attributes 00:13:39.058 128-bit Host Identifier: Not Supported 00:13:39.058 Non-Operational Permissive Mode: Not Supported 00:13:39.058 NVM Sets: Not Supported 00:13:39.058 Read Recovery Levels: Not Supported 00:13:39.058 Endurance Groups: Not Supported 00:13:39.058 Predictable Latency Mode: Not Supported 00:13:39.058 Traffic Based Keep ALive: Not Supported 00:13:39.058 Namespace Granularity: Not Supported 00:13:39.058 SQ Associations: Not Supported 00:13:39.058 UUID List: Not Supported 00:13:39.058 Multi-Domain Subsystem: Not Supported 00:13:39.058 Fixed Capacity Management: Not Supported 00:13:39.058 Variable Capacity Management: Not Supported 00:13:39.058 Delete Endurance Group: Not Supported 
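The bdf list that nvme_identify feeds to spdk_nvme_identify is produced by get_nvme_bdfs, which parses the bdev config emitted by scripts/gen_nvme.sh with jq, exactly as traced at the start of this identify run. A minimal stand-alone sketch of that enumeration step in bash (the repo path and the four bdfs are the ones from this log; everything else is standard shell):
  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh prints bdev_nvme_attach_controller config JSON, one entry per local controller
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0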
00:13:39.058 Delete NVM Set: Not Supported 00:13:39.058 Extended LBA Formats Supported: Supported 00:13:39.058 Flexible Data Placement Supported: Not Supported 00:13:39.059 00:13:39.059 Controller Memory Buffer Support 00:13:39.059 ================================ 00:13:39.059 Supported: No 00:13:39.059 00:13:39.059 Persistent Memory Region Support 00:13:39.059 ================================ 00:13:39.059 Supported: No 00:13:39.059 00:13:39.059 Admin Command Set Attributes 00:13:39.059 ============================ 00:13:39.059 Security Send/Receive: Not Supported 00:13:39.059 Format NVM: Supported 00:13:39.059 Firmware Activate/Download: Not Supported 00:13:39.059 Namespace Management: Supported 00:13:39.059 Device Self-Test: Not Supported 00:13:39.059 Directives: Supported 00:13:39.059 NVMe-MI: Not Supported 00:13:39.059 Virtualization Management: Not Supported 00:13:39.059 Doorbell Buffer Config: Supported 00:13:39.059 Get LBA Status Capability: Not Supported 00:13:39.059 Command & Feature Lockdown Capability: Not Supported 00:13:39.059 Abort Command Limit: 4 00:13:39.059 Async Event Request Limit: 4 00:13:39.059 Number of Firmware Slots: N/A 00:13:39.059 Firmware Slot 1 Read-Only: N/A 00:13:39.059 Firmware Activation Without Reset: N/A 00:13:39.059 Multiple Update Detection Support: N/A 00:13:39.059 Firmware Update Granularity: No Information Provided 00:13:39.059 Per-Namespace SMART Log: Yes 00:13:39.059 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.059 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:39.059 Command Effects Log Page: Supported 00:13:39.059 Get Log Page Extended Data: Supported 00:13:39.059 Telemetry Log Pages: Not Supported 00:13:39.059 Persistent Event Log Pages: Not Supported 00:13:39.059 Supported Log Pages Log Page: May Support 00:13:39.059 Commands Supported & Effects Log Page: Not Supported 00:13:39.059 Feature Identifiers & Effects Log Page:May Support 00:13:39.059 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.059 Data Area 4 for Telemetry Log: Not Supported 00:13:39.059 Error Log Page Entries Supported: 1 00:13:39.059 Keep Alive: Not Supported 00:13:39.059 00:13:39.059 NVM Command Set Attributes 00:13:39.059 ========================== 00:13:39.059 Submission Queue Entry Size 00:13:39.059 Max: 64 00:13:39.059 Min: 64 00:13:39.059 Completion Queue Entry Size 00:13:39.059 Max: 16 00:13:39.059 Min: 16 00:13:39.059 Number of Namespaces: 256 00:13:39.059 Compare Command: Supported 00:13:39.059 Write Uncorrectable Command: Not Supported 00:13:39.059 Dataset Management Command: Supported 00:13:39.059 Write Zeroes Command: Supported 00:13:39.059 Set Features Save Field: Supported 00:13:39.059 Reservations: Not Supported 00:13:39.059 Timestamp: Supported 00:13:39.059 Copy: Supported 00:13:39.059 Volatile Write Cache: Present 00:13:39.059 Atomic Write Unit (Normal): 1 00:13:39.059 Atomic Write Unit (PFail): 1 00:13:39.059 Atomic Compare & Write Unit: 1 00:13:39.059 Fused Compare & Write: Not Supported 00:13:39.059 Scatter-Gather List 00:13:39.059 SGL Command Set: Supported 00:13:39.059 SGL Keyed: Not Supported 00:13:39.059 SGL Bit Bucket Descriptor: Not Supported 00:13:39.059 SGL Metadata Pointer: Not Supported 00:13:39.059 Oversized SGL: Not Supported 00:13:39.059 SGL Metadata Address: Not Supported 00:13:39.059 SGL Offset: Not Supported 00:13:39.059 Transport SGL Data Block: Not Supported 00:13:39.059 Replay Protected Memory Block: Not Supported 00:13:39.059 00:13:39.059 Firmware Slot Information 00:13:39.059 ========================= 
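The controller report running through this part of the log comes from one spdk_nvme_identify invocation over all four devices; the trace above shows it launched with -i 0, the shared memory group ID of the stub process started earlier, so it attaches to that DPDK instance as a secondary process instead of re-probing the bus. To dump a single controller on its own, the tool can instead be pointed at a transport ID via -r; a sketch, assuming the build path used in this run:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'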
00:13:39.059 Active slot: 1 00:13:39.059 Slot 1 Firmware Revision: 1.0 00:13:39.059 00:13:39.059 00:13:39.059 Commands Supported and Effects 00:13:39.059 ============================== 00:13:39.059 Admin Commands 00:13:39.059 -------------- 00:13:39.059 Delete I/O Submission Queue (00h): Supported 00:13:39.059 Create I/O Submission Queue (01h): Supported 00:13:39.059 Get Log Page (02h): Supported 00:13:39.059 Delete I/O Completion Queue (04h): Supported 00:13:39.059 Create I/O Completion Queue (05h): Supported 00:13:39.059 Identify (06h): Supported 00:13:39.059 Abort (08h): Supported 00:13:39.059 Set Features (09h): Supported 00:13:39.059 Get Features (0Ah): Supported 00:13:39.059 Asynchronous Event Request (0Ch): Supported 00:13:39.059 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:39.059 Directive Send (19h): Supported 00:13:39.059 Directive Receive (1Ah): Supported 00:13:39.059 Virtualization Management (1Ch): Supported 00:13:39.059 Doorbell Buffer Config (7Ch): Supported 00:13:39.059 Format NVM (80h): Supported LBA-Change 00:13:39.059 I/O Commands 00:13:39.059 ------------ 00:13:39.059 Flush (00h): Supported LBA-Change 00:13:39.059 Write (01h): Supported LBA-Change 00:13:39.059 Read (02h): Supported 00:13:39.059 Compare (05h): Supported 00:13:39.059 Write Zeroes (08h): Supported LBA-Change 00:13:39.059 Dataset Management (09h): Supported LBA-Change 00:13:39.059 Unknown (0Ch): Supported 00:13:39.059 Unknown (12h): Supported 00:13:39.059 Copy (19h): Supported LBA-Change 00:13:39.059 Unknown (1Dh): Supported LBA-Change 00:13:39.059 00:13:39.059 Error Log 00:13:39.059 ========= 00:13:39.059 00:13:39.059 Arbitration 00:13:39.059 =========== 00:13:39.059 Arbitration Burst: no limit 00:13:39.059 00:13:39.059 Power Management 00:13:39.059 ================ 00:13:39.059 Number of Power States: 1 00:13:39.059 Current Power State: Power State #0 00:13:39.059 Power State #0: 00:13:39.059 Max Power: 25.00 W 00:13:39.059 Non-Operational State: Operational 00:13:39.059 Entry Latency: 16 microseconds 00:13:39.059 Exit Latency: 4 microseconds 00:13:39.059 Relative Read Throughput: 0 00:13:39.059 Relative Read Latency: 0 00:13:39.059 Relative Write Throughput: 0 00:13:39.059 Relative Write Latency: 0 00:13:39.059 [2024-11-25 10:19:33.235345] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64339 terminated unexpected 00:13:39.059 Idle Power: Not Reported 00:13:39.059 Active Power: Not Reported 00:13:39.059 Non-Operational Permissive Mode: Not Supported 00:13:39.059 00:13:39.059 Health Information 00:13:39.059 ================== 00:13:39.059 Critical Warnings: 00:13:39.059 Available Spare Space: OK 00:13:39.059 Temperature: OK 00:13:39.059 Device Reliability: OK 00:13:39.059 Read Only: No 00:13:39.059 Volatile Memory Backup: OK 00:13:39.059 Current Temperature: 323 Kelvin (50 Celsius) 00:13:39.059 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:39.059 Available Spare: 0% 00:13:39.059 Available Spare Threshold: 0% 00:13:39.059 Life Percentage Used: 0% 00:13:39.059 Data Units Read: 704 00:13:39.059 Data Units Written: 632 00:13:39.059 Host Read Commands: 34142 00:13:39.059 Host Write Commands: 33928 00:13:39.059 Controller Busy Time: 0 minutes 00:13:39.059 Power Cycles: 0 00:13:39.059 Power On Hours: 0 hours 00:13:39.059 Unsafe Shutdowns: 0 00:13:39.059 Unrecoverable Media Errors: 0 00:13:39.059 Lifetime Error Log Entries: 0 00:13:39.059 Warning Temperature Time: 0 minutes 00:13:39.059 Critical Temperature Time: 0 minutes 00:13:39.059 00:13:39.059
Number of Queues 00:13:39.059 ================ 00:13:39.059 Number of I/O Submission Queues: 64 00:13:39.059 Number of I/O Completion Queues: 64 00:13:39.059 00:13:39.059 ZNS Specific Controller Data 00:13:39.059 ============================ 00:13:39.059 Zone Append Size Limit: 0 00:13:39.059 00:13:39.059 00:13:39.059 Active Namespaces 00:13:39.059 ================= 00:13:39.059 Namespace ID:1 00:13:39.059 Error Recovery Timeout: Unlimited 00:13:39.059 Command Set Identifier: NVM (00h) 00:13:39.059 Deallocate: Supported 00:13:39.059 Deallocated/Unwritten Error: Supported 00:13:39.059 Deallocated Read Value: All 0x00 00:13:39.059 Deallocate in Write Zeroes: Not Supported 00:13:39.059 Deallocated Guard Field: 0xFFFF 00:13:39.059 Flush: Supported 00:13:39.059 Reservation: Not Supported 00:13:39.059 Metadata Transferred as: Separate Metadata Buffer 00:13:39.059 Namespace Sharing Capabilities: Private 00:13:39.059 Size (in LBAs): 1548666 (5GiB) 00:13:39.059 Capacity (in LBAs): 1548666 (5GiB) 00:13:39.059 Utilization (in LBAs): 1548666 (5GiB) 00:13:39.059 Thin Provisioning: Not Supported 00:13:39.059 Per-NS Atomic Units: No 00:13:39.059 Maximum Single Source Range Length: 128 00:13:39.059 Maximum Copy Length: 128 00:13:39.059 Maximum Source Range Count: 128 00:13:39.059 NGUID/EUI64 Never Reused: No 00:13:39.059 Namespace Write Protected: No 00:13:39.059 Number of LBA Formats: 8 00:13:39.059 Current LBA Format: LBA Format #07 00:13:39.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.059 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.060 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.060 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.060 00:13:39.060 NVM Specific Namespace Data 00:13:39.060 =========================== 00:13:39.060 Logical Block Storage Tag Mask: 0 00:13:39.060 Protection Information Capabilities: 00:13:39.060 16b Guard Protection Information Storage Tag Support: No 00:13:39.060 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.060 Storage Tag Check Read Support: No 00:13:39.060 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.060 ===================================================== 00:13:39.060 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:39.060 ===================================================== 00:13:39.060 Controller Capabilities/Features 00:13:39.060 ================================ 00:13:39.060 Vendor ID: 1b36 00:13:39.060 Subsystem Vendor ID: 1af4 00:13:39.060 Serial Number: 
12341 00:13:39.060 Model Number: QEMU NVMe Ctrl 00:13:39.060 Firmware Version: 8.0.0 00:13:39.060 Recommended Arb Burst: 6 00:13:39.060 IEEE OUI Identifier: 00 54 52 00:13:39.060 Multi-path I/O 00:13:39.060 May have multiple subsystem ports: No 00:13:39.060 May have multiple controllers: No 00:13:39.060 Associated with SR-IOV VF: No 00:13:39.060 Max Data Transfer Size: 524288 00:13:39.060 Max Number of Namespaces: 256 00:13:39.060 Max Number of I/O Queues: 64 00:13:39.060 NVMe Specification Version (VS): 1.4 00:13:39.060 NVMe Specification Version (Identify): 1.4 00:13:39.060 Maximum Queue Entries: 2048 00:13:39.060 Contiguous Queues Required: Yes 00:13:39.060 Arbitration Mechanisms Supported 00:13:39.060 Weighted Round Robin: Not Supported 00:13:39.060 Vendor Specific: Not Supported 00:13:39.060 Reset Timeout: 7500 ms 00:13:39.060 Doorbell Stride: 4 bytes 00:13:39.060 NVM Subsystem Reset: Not Supported 00:13:39.060 Command Sets Supported 00:13:39.060 NVM Command Set: Supported 00:13:39.060 Boot Partition: Not Supported 00:13:39.060 Memory Page Size Minimum: 4096 bytes 00:13:39.060 Memory Page Size Maximum: 65536 bytes 00:13:39.060 Persistent Memory Region: Not Supported 00:13:39.060 Optional Asynchronous Events Supported 00:13:39.060 Namespace Attribute Notices: Supported 00:13:39.060 Firmware Activation Notices: Not Supported 00:13:39.060 ANA Change Notices: Not Supported 00:13:39.060 PLE Aggregate Log Change Notices: Not Supported 00:13:39.060 LBA Status Info Alert Notices: Not Supported 00:13:39.060 EGE Aggregate Log Change Notices: Not Supported 00:13:39.060 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.060 Zone Descriptor Change Notices: Not Supported 00:13:39.060 Discovery Log Change Notices: Not Supported 00:13:39.060 Controller Attributes 00:13:39.060 128-bit Host Identifier: Not Supported 00:13:39.060 Non-Operational Permissive Mode: Not Supported 00:13:39.060 NVM Sets: Not Supported 00:13:39.060 Read Recovery Levels: Not Supported 00:13:39.060 Endurance Groups: Not Supported 00:13:39.060 Predictable Latency Mode: Not Supported 00:13:39.060 Traffic Based Keep ALive: Not Supported 00:13:39.060 Namespace Granularity: Not Supported 00:13:39.060 SQ Associations: Not Supported 00:13:39.060 UUID List: Not Supported 00:13:39.060 Multi-Domain Subsystem: Not Supported 00:13:39.060 Fixed Capacity Management: Not Supported 00:13:39.060 Variable Capacity Management: Not Supported 00:13:39.060 Delete Endurance Group: Not Supported 00:13:39.060 Delete NVM Set: Not Supported 00:13:39.060 Extended LBA Formats Supported: Supported 00:13:39.060 Flexible Data Placement Supported: Not Supported 00:13:39.060 00:13:39.060 Controller Memory Buffer Support 00:13:39.060 ================================ 00:13:39.060 Supported: No 00:13:39.060 00:13:39.060 Persistent Memory Region Support 00:13:39.060 ================================ 00:13:39.060 Supported: No 00:13:39.060 00:13:39.060 Admin Command Set Attributes 00:13:39.060 ============================ 00:13:39.060 Security Send/Receive: Not Supported 00:13:39.060 Format NVM: Supported 00:13:39.060 Firmware Activate/Download: Not Supported 00:13:39.060 Namespace Management: Supported 00:13:39.060 Device Self-Test: Not Supported 00:13:39.060 Directives: Supported 00:13:39.060 NVMe-MI: Not Supported 00:13:39.060 Virtualization Management: Not Supported 00:13:39.060 Doorbell Buffer Config: Supported 00:13:39.060 Get LBA Status Capability: Not Supported 00:13:39.060 Command & Feature Lockdown Capability: Not Supported 00:13:39.060 Abort 
Command Limit: 4 00:13:39.060 Async Event Request Limit: 4 00:13:39.060 Number of Firmware Slots: N/A 00:13:39.060 Firmware Slot 1 Read-Only: N/A 00:13:39.060 Firmware Activation Without Reset: N/A 00:13:39.060 Multiple Update Detection Support: N/A 00:13:39.060 Firmware Update Granularity: No Information Provided 00:13:39.060 Per-Namespace SMART Log: Yes 00:13:39.060 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.060 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:39.060 Command Effects Log Page: Supported 00:13:39.060 Get Log Page Extended Data: Supported 00:13:39.060 Telemetry Log Pages: Not Supported 00:13:39.060 Persistent Event Log Pages: Not Supported 00:13:39.060 Supported Log Pages Log Page: May Support 00:13:39.060 Commands Supported & Effects Log Page: Not Supported 00:13:39.060 Feature Identifiers & Effects Log Page:May Support 00:13:39.060 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.060 Data Area 4 for Telemetry Log: Not Supported 00:13:39.060 Error Log Page Entries Supported: 1 00:13:39.060 Keep Alive: Not Supported 00:13:39.060 00:13:39.060 NVM Command Set Attributes 00:13:39.060 ========================== 00:13:39.060 Submission Queue Entry Size 00:13:39.060 Max: 64 00:13:39.060 Min: 64 00:13:39.060 Completion Queue Entry Size 00:13:39.060 Max: 16 00:13:39.060 Min: 16 00:13:39.060 Number of Namespaces: 256 00:13:39.060 Compare Command: Supported 00:13:39.060 Write Uncorrectable Command: Not Supported 00:13:39.060 Dataset Management Command: Supported 00:13:39.060 Write Zeroes Command: Supported 00:13:39.060 Set Features Save Field: Supported 00:13:39.060 Reservations: Not Supported 00:13:39.060 Timestamp: Supported 00:13:39.060 Copy: Supported 00:13:39.060 Volatile Write Cache: Present 00:13:39.060 Atomic Write Unit (Normal): 1 00:13:39.060 Atomic Write Unit (PFail): 1 00:13:39.060 Atomic Compare & Write Unit: 1 00:13:39.060 Fused Compare & Write: Not Supported 00:13:39.060 Scatter-Gather List 00:13:39.060 SGL Command Set: Supported 00:13:39.060 SGL Keyed: Not Supported 00:13:39.060 SGL Bit Bucket Descriptor: Not Supported 00:13:39.060 SGL Metadata Pointer: Not Supported 00:13:39.060 Oversized SGL: Not Supported 00:13:39.060 SGL Metadata Address: Not Supported 00:13:39.060 SGL Offset: Not Supported 00:13:39.060 Transport SGL Data Block: Not Supported 00:13:39.060 Replay Protected Memory Block: Not Supported 00:13:39.060 00:13:39.060 Firmware Slot Information 00:13:39.060 ========================= 00:13:39.060 Active slot: 1 00:13:39.060 Slot 1 Firmware Revision: 1.0 00:13:39.060 00:13:39.060 00:13:39.060 Commands Supported and Effects 00:13:39.060 ============================== 00:13:39.060 Admin Commands 00:13:39.060 -------------- 00:13:39.060 Delete I/O Submission Queue (00h): Supported 00:13:39.060 Create I/O Submission Queue (01h): Supported 00:13:39.060 Get Log Page (02h): Supported 00:13:39.060 Delete I/O Completion Queue (04h): Supported 00:13:39.060 Create I/O Completion Queue (05h): Supported 00:13:39.060 Identify (06h): Supported 00:13:39.060 Abort (08h): Supported 00:13:39.060 Set Features (09h): Supported 00:13:39.060 Get Features (0Ah): Supported 00:13:39.060 Asynchronous Event Request (0Ch): Supported 00:13:39.060 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:39.060 Directive Send (19h): Supported 00:13:39.060 Directive Receive (1Ah): Supported 00:13:39.060 Virtualization Management (1Ch): Supported 00:13:39.060 Doorbell Buffer Config (7Ch): Supported 00:13:39.060 Format NVM (80h): Supported LBA-Change 
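For reference, the bdev_gpt_uuid assertions traced at the top of this section reduce to one pattern: capture bdev_get_bdevs JSON for the partition, then compare jq projections of it. A condensed sketch, assuming a running SPDK target, the repo's scripts/rpc.py, and the partition bdev from this run (Nvme1n1p2):
  bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p2)
  [[ $(jq -r 'length' <<<"$bdev") == 1 ]]   # exactly one bdev matches the name
  # a GPT partition bdev's first alias is its unique_partition_guid
  [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") ]]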
00:13:39.060 I/O Commands 00:13:39.060 ------------ 00:13:39.060 Flush (00h): Supported LBA-Change 00:13:39.060 Write (01h): Supported LBA-Change 00:13:39.060 Read (02h): Supported 00:13:39.060 Compare (05h): Supported 00:13:39.060 Write Zeroes (08h): Supported LBA-Change 00:13:39.060 Dataset Management (09h): Supported LBA-Change 00:13:39.060 Unknown (0Ch): Supported 00:13:39.060 Unknown (12h): Supported 00:13:39.061 Copy (19h): Supported LBA-Change 00:13:39.061 Unknown (1Dh): Supported LBA-Change 00:13:39.061 00:13:39.061 Error Log 00:13:39.061 ========= 00:13:39.061 00:13:39.061 Arbitration 00:13:39.061 =========== 00:13:39.061 Arbitration Burst: no limit 00:13:39.061 00:13:39.061 Power Management 00:13:39.061 ================ 00:13:39.061 Number of Power States: 1 00:13:39.061 Current Power State: Power State #0 00:13:39.061 Power State #0: 00:13:39.061 Max Power: 25.00 W 00:13:39.061 Non-Operational State: Operational 00:13:39.061 Entry Latency: 16 microseconds 00:13:39.061 Exit Latency: 4 microseconds 00:13:39.061 Relative Read Throughput: 0 00:13:39.061 Relative Read Latency: 0 00:13:39.061 Relative Write Throughput: 0 00:13:39.061 Relative Write Latency: 0 00:13:39.061 Idle Power: Not Reported 00:13:39.061 Active Power: Not Reported 00:13:39.061 Non-Operational Permissive Mode: Not Supported 00:13:39.061 00:13:39.061 Health Information 00:13:39.061 ================== 00:13:39.061 Critical Warnings: 00:13:39.061 Available Spare Space: OK 00:13:39.061 [2024-11-25 10:19:33.236437] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64339 terminated unexpected 00:13:39.061 Temperature: OK 00:13:39.061 Device Reliability: OK 00:13:39.061 Read Only: No 00:13:39.061 Volatile Memory Backup: OK 00:13:39.061 Current Temperature: 323 Kelvin (50 Celsius) 00:13:39.061 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:39.061 Available Spare: 0% 00:13:39.061 Available Spare Threshold: 0% 00:13:39.061 Life Percentage Used: 0% 00:13:39.061 Data Units Read: 1071 00:13:39.061 Data Units Written: 938 00:13:39.061 Host Read Commands: 50728 00:13:39.061 Host Write Commands: 49516 00:13:39.061 Controller Busy Time: 0 minutes 00:13:39.061 Power Cycles: 0 00:13:39.061 Power On Hours: 0 hours 00:13:39.061 Unsafe Shutdowns: 0 00:13:39.061 Unrecoverable Media Errors: 0 00:13:39.061 Lifetime Error Log Entries: 0 00:13:39.061 Warning Temperature Time: 0 minutes 00:13:39.061 Critical Temperature Time: 0 minutes 00:13:39.061 00:13:39.061 Number of Queues 00:13:39.061 ================ 00:13:39.061 Number of I/O Submission Queues: 64 00:13:39.061 Number of I/O Completion Queues: 64 00:13:39.061 00:13:39.061 ZNS Specific Controller Data 00:13:39.061 ============================ 00:13:39.061 Zone Append Size Limit: 0 00:13:39.061 00:13:39.061 00:13:39.061 Active Namespaces 00:13:39.061 ================= 00:13:39.061 Namespace ID:1 00:13:39.061 Error Recovery Timeout: Unlimited 00:13:39.061 Command Set Identifier: NVM (00h) 00:13:39.061 Deallocate: Supported 00:13:39.061 Deallocated/Unwritten Error: Supported 00:13:39.061 Deallocated Read Value: All 0x00 00:13:39.061 Deallocate in Write Zeroes: Not Supported 00:13:39.061 Deallocated Guard Field: 0xFFFF 00:13:39.061 Flush: Supported 00:13:39.061 Reservation: Not Supported 00:13:39.061 Namespace Sharing Capabilities: Private 00:13:39.061 Size (in LBAs): 1310720 (5GiB) 00:13:39.061 Capacity (in LBAs): 1310720 (5GiB) 00:13:39.061 Utilization (in LBAs): 1310720 (5GiB) 00:13:39.061 Thin Provisioning: Not Supported 00:13:39.061 Per-NS
Atomic Units: No 00:13:39.061 Maximum Single Source Range Length: 128 00:13:39.061 Maximum Copy Length: 128 00:13:39.061 Maximum Source Range Count: 128 00:13:39.061 NGUID/EUI64 Never Reused: No 00:13:39.061 Namespace Write Protected: No 00:13:39.061 Number of LBA Formats: 8 00:13:39.061 Current LBA Format: LBA Format #04 00:13:39.061 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.061 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.061 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.061 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.061 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.061 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.061 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.061 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.061 00:13:39.061 NVM Specific Namespace Data 00:13:39.061 =========================== 00:13:39.061 Logical Block Storage Tag Mask: 0 00:13:39.061 Protection Information Capabilities: 00:13:39.061 16b Guard Protection Information Storage Tag Support: No 00:13:39.061 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.061 Storage Tag Check Read Support: No 00:13:39.061 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.061 ===================================================== 00:13:39.061 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:39.061 ===================================================== 00:13:39.061 Controller Capabilities/Features 00:13:39.061 ================================ 00:13:39.061 Vendor ID: 1b36 00:13:39.061 Subsystem Vendor ID: 1af4 00:13:39.061 Serial Number: 12343 00:13:39.061 Model Number: QEMU NVMe Ctrl 00:13:39.061 Firmware Version: 8.0.0 00:13:39.061 Recommended Arb Burst: 6 00:13:39.061 IEEE OUI Identifier: 00 54 52 00:13:39.061 Multi-path I/O 00:13:39.061 May have multiple subsystem ports: No 00:13:39.061 May have multiple controllers: Yes 00:13:39.061 Associated with SR-IOV VF: No 00:13:39.061 Max Data Transfer Size: 524288 00:13:39.061 Max Number of Namespaces: 256 00:13:39.061 Max Number of I/O Queues: 64 00:13:39.061 NVMe Specification Version (VS): 1.4 00:13:39.061 NVMe Specification Version (Identify): 1.4 00:13:39.061 Maximum Queue Entries: 2048 00:13:39.061 Contiguous Queues Required: Yes 00:13:39.061 Arbitration Mechanisms Supported 00:13:39.061 Weighted Round Robin: Not Supported 00:13:39.061 Vendor Specific: Not Supported 00:13:39.061 Reset Timeout: 7500 ms 00:13:39.061 Doorbell Stride: 4 bytes 00:13:39.061 NVM Subsystem Reset: Not Supported 00:13:39.061 Command Sets Supported 00:13:39.061 NVM Command Set: Supported 00:13:39.061 Boot Partition: Not Supported 00:13:39.061 Memory Page Size Minimum: 4096 bytes 00:13:39.061 Memory Page Size 
Maximum: 65536 bytes 00:13:39.061 Persistent Memory Region: Not Supported 00:13:39.061 Optional Asynchronous Events Supported 00:13:39.061 Namespace Attribute Notices: Supported 00:13:39.061 Firmware Activation Notices: Not Supported 00:13:39.061 ANA Change Notices: Not Supported 00:13:39.061 PLE Aggregate Log Change Notices: Not Supported 00:13:39.061 LBA Status Info Alert Notices: Not Supported 00:13:39.061 EGE Aggregate Log Change Notices: Not Supported 00:13:39.061 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.061 Zone Descriptor Change Notices: Not Supported 00:13:39.061 Discovery Log Change Notices: Not Supported 00:13:39.061 Controller Attributes 00:13:39.061 128-bit Host Identifier: Not Supported 00:13:39.061 Non-Operational Permissive Mode: Not Supported 00:13:39.061 NVM Sets: Not Supported 00:13:39.061 Read Recovery Levels: Not Supported 00:13:39.061 Endurance Groups: Supported 00:13:39.061 Predictable Latency Mode: Not Supported 00:13:39.061 Traffic Based Keep ALive: Not Supported 00:13:39.061 Namespace Granularity: Not Supported 00:13:39.061 SQ Associations: Not Supported 00:13:39.061 UUID List: Not Supported 00:13:39.061 Multi-Domain Subsystem: Not Supported 00:13:39.061 Fixed Capacity Management: Not Supported 00:13:39.061 Variable Capacity Management: Not Supported 00:13:39.061 Delete Endurance Group: Not Supported 00:13:39.061 Delete NVM Set: Not Supported 00:13:39.061 Extended LBA Formats Supported: Supported 00:13:39.061 Flexible Data Placement Supported: Supported 00:13:39.061 00:13:39.061 Controller Memory Buffer Support 00:13:39.061 ================================ 00:13:39.061 Supported: No 00:13:39.061 00:13:39.061 Persistent Memory Region Support 00:13:39.061 ================================ 00:13:39.061 Supported: No 00:13:39.061 00:13:39.061 Admin Command Set Attributes 00:13:39.061 ============================ 00:13:39.061 Security Send/Receive: Not Supported 00:13:39.061 Format NVM: Supported 00:13:39.061 Firmware Activate/Download: Not Supported 00:13:39.061 Namespace Management: Supported 00:13:39.061 Device Self-Test: Not Supported 00:13:39.061 Directives: Supported 00:13:39.061 NVMe-MI: Not Supported 00:13:39.061 Virtualization Management: Not Supported 00:13:39.061 Doorbell Buffer Config: Supported 00:13:39.061 Get LBA Status Capability: Not Supported 00:13:39.061 Command & Feature Lockdown Capability: Not Supported 00:13:39.061 Abort Command Limit: 4 00:13:39.062 Async Event Request Limit: 4 00:13:39.062 Number of Firmware Slots: N/A 00:13:39.062 Firmware Slot 1 Read-Only: N/A 00:13:39.062 Firmware Activation Without Reset: N/A 00:13:39.062 Multiple Update Detection Support: N/A 00:13:39.062 Firmware Update Granularity: No Information Provided 00:13:39.062 Per-Namespace SMART Log: Yes 00:13:39.062 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.062 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:39.062 Command Effects Log Page: Supported 00:13:39.062 Get Log Page Extended Data: Supported 00:13:39.062 Telemetry Log Pages: Not Supported 00:13:39.062 Persistent Event Log Pages: Not Supported 00:13:39.062 Supported Log Pages Log Page: May Support 00:13:39.062 Commands Supported & Effects Log Page: Not Supported 00:13:39.062 Feature Identifiers & Effects Log Page:May Support 00:13:39.062 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.062 Data Area 4 for Telemetry Log: Not Supported 00:13:39.062 Error Log Page Entries Supported: 1 00:13:39.062 Keep Alive: Not Supported 00:13:39.062 00:13:39.062 NVM Command Set 
Attributes 00:13:39.062 ========================== 00:13:39.062 Submission Queue Entry Size 00:13:39.062 Max: 64 00:13:39.062 Min: 64 00:13:39.062 Completion Queue Entry Size 00:13:39.062 Max: 16 00:13:39.062 Min: 16 00:13:39.062 Number of Namespaces: 256 00:13:39.062 Compare Command: Supported 00:13:39.062 Write Uncorrectable Command: Not Supported 00:13:39.062 Dataset Management Command: Supported 00:13:39.062 Write Zeroes Command: Supported 00:13:39.062 Set Features Save Field: Supported 00:13:39.062 Reservations: Not Supported 00:13:39.062 Timestamp: Supported 00:13:39.062 Copy: Supported 00:13:39.062 Volatile Write Cache: Present 00:13:39.062 Atomic Write Unit (Normal): 1 00:13:39.062 Atomic Write Unit (PFail): 1 00:13:39.062 Atomic Compare & Write Unit: 1 00:13:39.062 Fused Compare & Write: Not Supported 00:13:39.062 Scatter-Gather List 00:13:39.062 SGL Command Set: Supported 00:13:39.062 SGL Keyed: Not Supported 00:13:39.062 SGL Bit Bucket Descriptor: Not Supported 00:13:39.062 SGL Metadata Pointer: Not Supported 00:13:39.062 Oversized SGL: Not Supported 00:13:39.062 SGL Metadata Address: Not Supported 00:13:39.062 SGL Offset: Not Supported 00:13:39.062 Transport SGL Data Block: Not Supported 00:13:39.062 Replay Protected Memory Block: Not Supported 00:13:39.062 00:13:39.062 Firmware Slot Information 00:13:39.062 ========================= 00:13:39.062 Active slot: 1 00:13:39.062 Slot 1 Firmware Revision: 1.0 00:13:39.062 00:13:39.062 00:13:39.062 Commands Supported and Effects 00:13:39.062 ============================== 00:13:39.062 Admin Commands 00:13:39.062 -------------- 00:13:39.062 Delete I/O Submission Queue (00h): Supported 00:13:39.062 Create I/O Submission Queue (01h): Supported 00:13:39.062 Get Log Page (02h): Supported 00:13:39.062 Delete I/O Completion Queue (04h): Supported 00:13:39.062 Create I/O Completion Queue (05h): Supported 00:13:39.062 Identify (06h): Supported 00:13:39.062 Abort (08h): Supported 00:13:39.062 Set Features (09h): Supported 00:13:39.062 Get Features (0Ah): Supported 00:13:39.062 Asynchronous Event Request (0Ch): Supported 00:13:39.062 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:39.062 Directive Send (19h): Supported 00:13:39.062 Directive Receive (1Ah): Supported 00:13:39.062 Virtualization Management (1Ch): Supported 00:13:39.062 Doorbell Buffer Config (7Ch): Supported 00:13:39.062 Format NVM (80h): Supported LBA-Change 00:13:39.062 I/O Commands 00:13:39.062 ------------ 00:13:39.062 Flush (00h): Supported LBA-Change 00:13:39.062 Write (01h): Supported LBA-Change 00:13:39.062 Read (02h): Supported 00:13:39.062 Compare (05h): Supported 00:13:39.062 Write Zeroes (08h): Supported LBA-Change 00:13:39.062 Dataset Management (09h): Supported LBA-Change 00:13:39.062 Unknown (0Ch): Supported 00:13:39.062 Unknown (12h): Supported 00:13:39.062 Copy (19h): Supported LBA-Change 00:13:39.062 Unknown (1Dh): Supported LBA-Change 00:13:39.062 00:13:39.062 Error Log 00:13:39.062 ========= 00:13:39.062 00:13:39.062 Arbitration 00:13:39.062 =========== 00:13:39.062 Arbitration Burst: no limit 00:13:39.062 00:13:39.062 Power Management 00:13:39.062 ================ 00:13:39.062 Number of Power States: 1 00:13:39.062 Current Power State: Power State #0 00:13:39.062 Power State #0: 00:13:39.062 Max Power: 25.00 W 00:13:39.062 Non-Operational State: Operational 00:13:39.062 Entry Latency: 16 microseconds 00:13:39.062 Exit Latency: 4 microseconds 00:13:39.062 Relative Read Throughput: 0 00:13:39.062 Relative Read Latency: 0 00:13:39.062 Relative 
Write Throughput: 0 00:13:39.062 Relative Write Latency: 0 00:13:39.062 Idle Power: Not Reported 00:13:39.062 Active Power: Not Reported 00:13:39.062 Non-Operational Permissive Mode: Not Supported 00:13:39.062 00:13:39.062 Health Information 00:13:39.062 ================== 00:13:39.062 Critical Warnings: 00:13:39.062 Available Spare Space: OK 00:13:39.062 Temperature: OK 00:13:39.062 Device Reliability: OK 00:13:39.062 Read Only: No 00:13:39.062 Volatile Memory Backup: OK 00:13:39.062 Current Temperature: 323 Kelvin (50 Celsius) 00:13:39.062 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:39.062 Available Spare: 0% 00:13:39.062 Available Spare Threshold: 0% 00:13:39.062 Life Percentage Used: 0% 00:13:39.062 Data Units Read: 784 00:13:39.062 Data Units Written: 713 00:13:39.062 Host Read Commands: 34945 00:13:39.062 Host Write Commands: 34368 00:13:39.062 Controller Busy Time: 0 minutes 00:13:39.062 Power Cycles: 0 00:13:39.062 Power On Hours: 0 hours 00:13:39.062 Unsafe Shutdowns: 0 00:13:39.062 Unrecoverable Media Errors: 0 00:13:39.062 Lifetime Error Log Entries: 0 00:13:39.062 Warning Temperature Time: 0 minutes 00:13:39.062 Critical Temperature Time: 0 minutes 00:13:39.062 00:13:39.062 Number of Queues 00:13:39.062 ================ 00:13:39.062 Number of I/O Submission Queues: 64 00:13:39.062 Number of I/O Completion Queues: 64 00:13:39.062 00:13:39.062 ZNS Specific Controller Data 00:13:39.062 ============================ 00:13:39.062 Zone Append Size Limit: 0 00:13:39.062 00:13:39.062 00:13:39.062 Active Namespaces 00:13:39.062 ================= 00:13:39.062 Namespace ID:1 00:13:39.062 Error Recovery Timeout: Unlimited 00:13:39.062 Command Set Identifier: NVM (00h) 00:13:39.062 Deallocate: Supported 00:13:39.062 Deallocated/Unwritten Error: Supported 00:13:39.062 Deallocated Read Value: All 0x00 00:13:39.062 Deallocate in Write Zeroes: Not Supported 00:13:39.062 Deallocated Guard Field: 0xFFFF 00:13:39.062 Flush: Supported 00:13:39.062 Reservation: Not Supported 00:13:39.062 Namespace Sharing Capabilities: Multiple Controllers 00:13:39.062 Size (in LBAs): 262144 (1GiB) 00:13:39.062 Capacity (in LBAs): 262144 (1GiB) 00:13:39.062 Utilization (in LBAs): 262144 (1GiB) 00:13:39.062 Thin Provisioning: Not Supported 00:13:39.062 Per-NS Atomic Units: No 00:13:39.062 Maximum Single Source Range Length: 128 00:13:39.062 Maximum Copy Length: 128 00:13:39.062 Maximum Source Range Count: 128 00:13:39.062 NGUID/EUI64 Never Reused: No 00:13:39.062 Namespace Write Protected: No 00:13:39.062 Endurance group ID: 1 00:13:39.062 Number of LBA Formats: 8 00:13:39.062 Current LBA Format: LBA Format #04 00:13:39.062 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.062 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.062 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.062 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.062 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.062 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.062 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.062 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.062 00:13:39.062 Get Feature FDP: 00:13:39.062 ================ 00:13:39.062 Enabled: Yes 00:13:39.062 FDP configuration index: 0 00:13:39.062 00:13:39.062 FDP configurations log page 00:13:39.062 =========================== 00:13:39.062 Number of FDP configurations: 1 00:13:39.062 Version: 0 00:13:39.062 Size: 112 00:13:39.062 FDP Configuration Descriptor: 0 00:13:39.062 Descriptor Size: 96 00:13:39.062 
Reclaim Group Identifier format: 2 00:13:39.062 FDP Volatile Write Cache: Not Present 00:13:39.062 FDP Configuration: Valid 00:13:39.062 Vendor Specific Size: 0 00:13:39.062 Number of Reclaim Groups: 2 00:13:39.062 Number of Reclaim Unit Handles: 8 00:13:39.062 Max Placement Identifiers: 128 00:13:39.062 Number of Namespaces Supported: 256 00:13:39.062 Reclaim unit Nominal Size: 6000000 bytes 00:13:39.062 Estimated Reclaim Unit Time Limit: Not Reported 00:13:39.062 RUH Desc #000: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #001: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #002: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #003: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #004: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #005: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #006: RUH Type: Initially Isolated 00:13:39.063 RUH Desc #007: RUH Type: Initially Isolated 00:13:39.063 00:13:39.063 FDP reclaim unit handle usage log page 00:13:39.063 ====================================== 00:13:39.063 Number of Reclaim Unit Handles: 8 00:13:39.063 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:39.063 RUH Usage Desc #001: RUH Attributes: Unused 00:13:39.063 RUH Usage Desc #002: RUH Attributes: Unused 00:13:39.063 RUH Usage Desc #003: RUH Attributes: Unused 00:13:39.063 RUH Usage Desc #004: RUH Attributes: Unused 00:13:39.063 RUH Usage Desc #005: RUH Attributes: Unused 00:13:39.063 RUH Usage Desc #006: RUH Attributes: Unused 00:13:39.063 RUH Usage Desc #007: RUH Attributes: Unused 00:13:39.063 00:13:39.063 FDP statistics log page 00:13:39.063 ======================= 00:13:39.063 Host bytes with metadata written: 441819136 00:13:39.063 [2024-11-25 10:19:33.238646] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64339 terminated unexpected 00:13:39.063 Media bytes with metadata written: 441884672 00:13:39.063 Media bytes erased: 0 00:13:39.063 00:13:39.063 FDP events log page 00:13:39.063 =================== 00:13:39.063 Number of FDP events: 0 00:13:39.063 00:13:39.063 NVM Specific Namespace Data 00:13:39.063 =========================== 00:13:39.063 Logical Block Storage Tag Mask: 0 00:13:39.063 Protection Information Capabilities: 00:13:39.063 16b Guard Protection Information Storage Tag Support: No 00:13:39.063 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.063 Storage Tag Check Read Support: No 00:13:39.063 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.063 ===================================================== 00:13:39.063 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:39.063 ===================================================== 00:13:39.063 Controller Capabilities/Features 00:13:39.063
================================ 00:13:39.063 Vendor ID: 1b36 00:13:39.063 Subsystem Vendor ID: 1af4 00:13:39.063 Serial Number: 12342 00:13:39.063 Model Number: QEMU NVMe Ctrl 00:13:39.063 Firmware Version: 8.0.0 00:13:39.063 Recommended Arb Burst: 6 00:13:39.063 IEEE OUI Identifier: 00 54 52 00:13:39.063 Multi-path I/O 00:13:39.063 May have multiple subsystem ports: No 00:13:39.063 May have multiple controllers: No 00:13:39.063 Associated with SR-IOV VF: No 00:13:39.063 Max Data Transfer Size: 524288 00:13:39.063 Max Number of Namespaces: 256 00:13:39.063 Max Number of I/O Queues: 64 00:13:39.063 NVMe Specification Version (VS): 1.4 00:13:39.063 NVMe Specification Version (Identify): 1.4 00:13:39.063 Maximum Queue Entries: 2048 00:13:39.063 Contiguous Queues Required: Yes 00:13:39.063 Arbitration Mechanisms Supported 00:13:39.063 Weighted Round Robin: Not Supported 00:13:39.063 Vendor Specific: Not Supported 00:13:39.063 Reset Timeout: 7500 ms 00:13:39.063 Doorbell Stride: 4 bytes 00:13:39.063 NVM Subsystem Reset: Not Supported 00:13:39.063 Command Sets Supported 00:13:39.063 NVM Command Set: Supported 00:13:39.063 Boot Partition: Not Supported 00:13:39.063 Memory Page Size Minimum: 4096 bytes 00:13:39.063 Memory Page Size Maximum: 65536 bytes 00:13:39.063 Persistent Memory Region: Not Supported 00:13:39.063 Optional Asynchronous Events Supported 00:13:39.063 Namespace Attribute Notices: Supported 00:13:39.063 Firmware Activation Notices: Not Supported 00:13:39.063 ANA Change Notices: Not Supported 00:13:39.063 PLE Aggregate Log Change Notices: Not Supported 00:13:39.063 LBA Status Info Alert Notices: Not Supported 00:13:39.063 EGE Aggregate Log Change Notices: Not Supported 00:13:39.063 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.063 Zone Descriptor Change Notices: Not Supported 00:13:39.063 Discovery Log Change Notices: Not Supported 00:13:39.063 Controller Attributes 00:13:39.063 128-bit Host Identifier: Not Supported 00:13:39.063 Non-Operational Permissive Mode: Not Supported 00:13:39.063 NVM Sets: Not Supported 00:13:39.063 Read Recovery Levels: Not Supported 00:13:39.063 Endurance Groups: Not Supported 00:13:39.063 Predictable Latency Mode: Not Supported 00:13:39.063 Traffic Based Keep ALive: Not Supported 00:13:39.063 Namespace Granularity: Not Supported 00:13:39.063 SQ Associations: Not Supported 00:13:39.063 UUID List: Not Supported 00:13:39.063 Multi-Domain Subsystem: Not Supported 00:13:39.063 Fixed Capacity Management: Not Supported 00:13:39.063 Variable Capacity Management: Not Supported 00:13:39.063 Delete Endurance Group: Not Supported 00:13:39.063 Delete NVM Set: Not Supported 00:13:39.063 Extended LBA Formats Supported: Supported 00:13:39.063 Flexible Data Placement Supported: Not Supported 00:13:39.063 00:13:39.063 Controller Memory Buffer Support 00:13:39.063 ================================ 00:13:39.063 Supported: No 00:13:39.063 00:13:39.063 Persistent Memory Region Support 00:13:39.063 ================================ 00:13:39.063 Supported: No 00:13:39.063 00:13:39.063 Admin Command Set Attributes 00:13:39.063 ============================ 00:13:39.063 Security Send/Receive: Not Supported 00:13:39.063 Format NVM: Supported 00:13:39.063 Firmware Activate/Download: Not Supported 00:13:39.063 Namespace Management: Supported 00:13:39.063 Device Self-Test: Not Supported 00:13:39.063 Directives: Supported 00:13:39.063 NVMe-MI: Not Supported 00:13:39.063 Virtualization Management: Not Supported 00:13:39.063 Doorbell Buffer Config: Supported 00:13:39.063 Get 
LBA Status Capability: Not Supported 00:13:39.063 Command & Feature Lockdown Capability: Not Supported 00:13:39.063 Abort Command Limit: 4 00:13:39.063 Async Event Request Limit: 4 00:13:39.063 Number of Firmware Slots: N/A 00:13:39.063 Firmware Slot 1 Read-Only: N/A 00:13:39.063 Firmware Activation Without Reset: N/A 00:13:39.063 Multiple Update Detection Support: N/A 00:13:39.063 Firmware Update Granularity: No Information Provided 00:13:39.063 Per-Namespace SMART Log: Yes 00:13:39.063 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.063 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:39.063 Command Effects Log Page: Supported 00:13:39.063 Get Log Page Extended Data: Supported 00:13:39.063 Telemetry Log Pages: Not Supported 00:13:39.063 Persistent Event Log Pages: Not Supported 00:13:39.063 Supported Log Pages Log Page: May Support 00:13:39.063 Commands Supported & Effects Log Page: Not Supported 00:13:39.063 Feature Identifiers & Effects Log Page:May Support 00:13:39.063 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.063 Data Area 4 for Telemetry Log: Not Supported 00:13:39.063 Error Log Page Entries Supported: 1 00:13:39.063 Keep Alive: Not Supported 00:13:39.063 00:13:39.063 NVM Command Set Attributes 00:13:39.063 ========================== 00:13:39.063 Submission Queue Entry Size 00:13:39.064 Max: 64 00:13:39.064 Min: 64 00:13:39.064 Completion Queue Entry Size 00:13:39.064 Max: 16 00:13:39.064 Min: 16 00:13:39.064 Number of Namespaces: 256 00:13:39.064 Compare Command: Supported 00:13:39.064 Write Uncorrectable Command: Not Supported 00:13:39.064 Dataset Management Command: Supported 00:13:39.064 Write Zeroes Command: Supported 00:13:39.064 Set Features Save Field: Supported 00:13:39.064 Reservations: Not Supported 00:13:39.064 Timestamp: Supported 00:13:39.064 Copy: Supported 00:13:39.064 Volatile Write Cache: Present 00:13:39.064 Atomic Write Unit (Normal): 1 00:13:39.064 Atomic Write Unit (PFail): 1 00:13:39.064 Atomic Compare & Write Unit: 1 00:13:39.064 Fused Compare & Write: Not Supported 00:13:39.064 Scatter-Gather List 00:13:39.064 SGL Command Set: Supported 00:13:39.064 SGL Keyed: Not Supported 00:13:39.064 SGL Bit Bucket Descriptor: Not Supported 00:13:39.064 SGL Metadata Pointer: Not Supported 00:13:39.064 Oversized SGL: Not Supported 00:13:39.064 SGL Metadata Address: Not Supported 00:13:39.064 SGL Offset: Not Supported 00:13:39.064 Transport SGL Data Block: Not Supported 00:13:39.064 Replay Protected Memory Block: Not Supported 00:13:39.064 00:13:39.064 Firmware Slot Information 00:13:39.064 ========================= 00:13:39.064 Active slot: 1 00:13:39.064 Slot 1 Firmware Revision: 1.0 00:13:39.064 00:13:39.064 00:13:39.064 Commands Supported and Effects 00:13:39.064 ============================== 00:13:39.064 Admin Commands 00:13:39.064 -------------- 00:13:39.064 Delete I/O Submission Queue (00h): Supported 00:13:39.064 Create I/O Submission Queue (01h): Supported 00:13:39.064 Get Log Page (02h): Supported 00:13:39.064 Delete I/O Completion Queue (04h): Supported 00:13:39.064 Create I/O Completion Queue (05h): Supported 00:13:39.064 Identify (06h): Supported 00:13:39.064 Abort (08h): Supported 00:13:39.064 Set Features (09h): Supported 00:13:39.064 Get Features (0Ah): Supported 00:13:39.064 Asynchronous Event Request (0Ch): Supported 00:13:39.064 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:39.064 Directive Send (19h): Supported 00:13:39.064 Directive Receive (1Ah): Supported 00:13:39.064 Virtualization Management (1Ch): 
Supported 00:13:39.064 Doorbell Buffer Config (7Ch): Supported 00:13:39.064 Format NVM (80h): Supported LBA-Change 00:13:39.064 I/O Commands 00:13:39.064 ------------ 00:13:39.064 Flush (00h): Supported LBA-Change 00:13:39.064 Write (01h): Supported LBA-Change 00:13:39.064 Read (02h): Supported 00:13:39.064 Compare (05h): Supported 00:13:39.064 Write Zeroes (08h): Supported LBA-Change 00:13:39.064 Dataset Management (09h): Supported LBA-Change 00:13:39.064 Unknown (0Ch): Supported 00:13:39.064 Unknown (12h): Supported 00:13:39.064 Copy (19h): Supported LBA-Change 00:13:39.064 Unknown (1Dh): Supported LBA-Change 00:13:39.064 00:13:39.064 Error Log 00:13:39.064 ========= 00:13:39.064 00:13:39.064 Arbitration 00:13:39.064 =========== 00:13:39.064 Arbitration Burst: no limit 00:13:39.064 00:13:39.064 Power Management 00:13:39.064 ================ 00:13:39.064 Number of Power States: 1 00:13:39.064 Current Power State: Power State #0 00:13:39.064 Power State #0: 00:13:39.064 Max Power: 25.00 W 00:13:39.064 Non-Operational State: Operational 00:13:39.064 Entry Latency: 16 microseconds 00:13:39.064 Exit Latency: 4 microseconds 00:13:39.064 Relative Read Throughput: 0 00:13:39.064 Relative Read Latency: 0 00:13:39.064 Relative Write Throughput: 0 00:13:39.064 Relative Write Latency: 0 00:13:39.064 Idle Power: Not Reported 00:13:39.064 Active Power: Not Reported 00:13:39.064 Non-Operational Permissive Mode: Not Supported 00:13:39.064 00:13:39.064 Health Information 00:13:39.064 ================== 00:13:39.064 Critical Warnings: 00:13:39.064 Available Spare Space: OK 00:13:39.064 Temperature: OK 00:13:39.064 Device Reliability: OK 00:13:39.064 Read Only: No 00:13:39.064 Volatile Memory Backup: OK 00:13:39.064 Current Temperature: 323 Kelvin (50 Celsius) 00:13:39.064 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:39.064 Available Spare: 0% 00:13:39.064 Available Spare Threshold: 0% 00:13:39.064 Life Percentage Used: 0% 00:13:39.064 Data Units Read: 2198 00:13:39.064 Data Units Written: 1985 00:13:39.064 Host Read Commands: 103632 00:13:39.064 Host Write Commands: 101901 00:13:39.064 Controller Busy Time: 0 minutes 00:13:39.064 Power Cycles: 0 00:13:39.064 Power On Hours: 0 hours 00:13:39.064 Unsafe Shutdowns: 0 00:13:39.064 Unrecoverable Media Errors: 0 00:13:39.064 Lifetime Error Log Entries: 0 00:13:39.064 Warning Temperature Time: 0 minutes 00:13:39.064 Critical Temperature Time: 0 minutes 00:13:39.064 00:13:39.064 Number of Queues 00:13:39.064 ================ 00:13:39.064 Number of I/O Submission Queues: 64 00:13:39.064 Number of I/O Completion Queues: 64 00:13:39.064 00:13:39.064 ZNS Specific Controller Data 00:13:39.064 ============================ 00:13:39.064 Zone Append Size Limit: 0 00:13:39.064 00:13:39.064 00:13:39.064 Active Namespaces 00:13:39.064 ================= 00:13:39.064 Namespace ID:1 00:13:39.064 Error Recovery Timeout: Unlimited 00:13:39.064 Command Set Identifier: NVM (00h) 00:13:39.064 Deallocate: Supported 00:13:39.064 Deallocated/Unwritten Error: Supported 00:13:39.064 Deallocated Read Value: All 0x00 00:13:39.064 Deallocate in Write Zeroes: Not Supported 00:13:39.064 Deallocated Guard Field: 0xFFFF 00:13:39.064 Flush: Supported 00:13:39.064 Reservation: Not Supported 00:13:39.064 Namespace Sharing Capabilities: Private 00:13:39.064 Size (in LBAs): 1048576 (4GiB) 00:13:39.064 Capacity (in LBAs): 1048576 (4GiB) 00:13:39.064 Utilization (in LBAs): 1048576 (4GiB) 00:13:39.064 Thin Provisioning: Not Supported 00:13:39.064 Per-NS Atomic Units: No 00:13:39.064 Maximum 
Single Source Range Length: 128 00:13:39.064 Maximum Copy Length: 128 00:13:39.064 Maximum Source Range Count: 128 00:13:39.064 NGUID/EUI64 Never Reused: No 00:13:39.064 Namespace Write Protected: No 00:13:39.064 Number of LBA Formats: 8 00:13:39.064 Current LBA Format: LBA Format #04 00:13:39.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.064 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.064 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.064 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.064 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.064 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.064 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.064 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.064 00:13:39.064 NVM Specific Namespace Data 00:13:39.064 =========================== 00:13:39.064 Logical Block Storage Tag Mask: 0 00:13:39.064 Protection Information Capabilities: 00:13:39.064 16b Guard Protection Information Storage Tag Support: No 00:13:39.064 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.064 Storage Tag Check Read Support: No 00:13:39.064 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.064 Namespace ID:2 00:13:39.064 Error Recovery Timeout: Unlimited 00:13:39.064 Command Set Identifier: NVM (00h) 00:13:39.064 Deallocate: Supported 00:13:39.064 Deallocated/Unwritten Error: Supported 00:13:39.064 Deallocated Read Value: All 0x00 00:13:39.064 Deallocate in Write Zeroes: Not Supported 00:13:39.064 Deallocated Guard Field: 0xFFFF 00:13:39.064 Flush: Supported 00:13:39.064 Reservation: Not Supported 00:13:39.064 Namespace Sharing Capabilities: Private 00:13:39.064 Size (in LBAs): 1048576 (4GiB) 00:13:39.064 Capacity (in LBAs): 1048576 (4GiB) 00:13:39.064 Utilization (in LBAs): 1048576 (4GiB) 00:13:39.064 Thin Provisioning: Not Supported 00:13:39.064 Per-NS Atomic Units: No 00:13:39.064 Maximum Single Source Range Length: 128 00:13:39.064 Maximum Copy Length: 128 00:13:39.064 Maximum Source Range Count: 128 00:13:39.064 NGUID/EUI64 Never Reused: No 00:13:39.064 Namespace Write Protected: No 00:13:39.064 Number of LBA Formats: 8 00:13:39.064 Current LBA Format: LBA Format #04 00:13:39.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.064 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.064 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.065 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.065 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.065 00:13:39.065 NVM 
Specific Namespace Data 00:13:39.065 =========================== 00:13:39.065 Logical Block Storage Tag Mask: 0 00:13:39.065 Protection Information Capabilities: 00:13:39.065 16b Guard Protection Information Storage Tag Support: No 00:13:39.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.065 Storage Tag Check Read Support: No 00:13:39.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Namespace ID:3 00:13:39.065 Error Recovery Timeout: Unlimited 00:13:39.065 Command Set Identifier: NVM (00h) 00:13:39.065 Deallocate: Supported 00:13:39.065 Deallocated/Unwritten Error: Supported 00:13:39.065 Deallocated Read Value: All 0x00 00:13:39.065 Deallocate in Write Zeroes: Not Supported 00:13:39.065 Deallocated Guard Field: 0xFFFF 00:13:39.065 Flush: Supported 00:13:39.065 Reservation: Not Supported 00:13:39.065 Namespace Sharing Capabilities: Private 00:13:39.065 Size (in LBAs): 1048576 (4GiB) 00:13:39.065 Capacity (in LBAs): 1048576 (4GiB) 00:13:39.065 Utilization (in LBAs): 1048576 (4GiB) 00:13:39.065 Thin Provisioning: Not Supported 00:13:39.065 Per-NS Atomic Units: No 00:13:39.065 Maximum Single Source Range Length: 128 00:13:39.065 Maximum Copy Length: 128 00:13:39.065 Maximum Source Range Count: 128 00:13:39.065 NGUID/EUI64 Never Reused: No 00:13:39.065 Namespace Write Protected: No 00:13:39.065 Number of LBA Formats: 8 00:13:39.065 Current LBA Format: LBA Format #04 00:13:39.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.065 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.065 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.065 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.065 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.065 00:13:39.065 NVM Specific Namespace Data 00:13:39.065 =========================== 00:13:39.065 Logical Block Storage Tag Mask: 0 00:13:39.065 Protection Information Capabilities: 00:13:39.065 16b Guard Protection Information Storage Tag Support: No 00:13:39.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.065 Storage Tag Check Read Support: No 00:13:39.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.065 10:19:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:39.065 10:19:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:13:39.324 ===================================================== 00:13:39.324 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:39.324 ===================================================== 00:13:39.324 Controller Capabilities/Features 00:13:39.324 ================================ 00:13:39.324 Vendor ID: 1b36 00:13:39.324 Subsystem Vendor ID: 1af4 00:13:39.324 Serial Number: 12340 00:13:39.324 Model Number: QEMU NVMe Ctrl 00:13:39.324 Firmware Version: 8.0.0 00:13:39.324 Recommended Arb Burst: 6 00:13:39.324 IEEE OUI Identifier: 00 54 52 00:13:39.324 Multi-path I/O 00:13:39.324 May have multiple subsystem ports: No 00:13:39.324 May have multiple controllers: No 00:13:39.324 Associated with SR-IOV VF: No 00:13:39.324 Max Data Transfer Size: 524288 00:13:39.324 Max Number of Namespaces: 256 00:13:39.324 Max Number of I/O Queues: 64 00:13:39.324 NVMe Specification Version (VS): 1.4 00:13:39.324 NVMe Specification Version (Identify): 1.4 00:13:39.324 Maximum Queue Entries: 2048 00:13:39.324 Contiguous Queues Required: Yes 00:13:39.324 Arbitration Mechanisms Supported 00:13:39.324 Weighted Round Robin: Not Supported 00:13:39.324 Vendor Specific: Not Supported 00:13:39.324 Reset Timeout: 7500 ms 00:13:39.324 Doorbell Stride: 4 bytes 00:13:39.324 NVM Subsystem Reset: Not Supported 00:13:39.324 Command Sets Supported 00:13:39.324 NVM Command Set: Supported 00:13:39.324 Boot Partition: Not Supported 00:13:39.324 Memory Page Size Minimum: 4096 bytes 00:13:39.324 Memory Page Size Maximum: 65536 bytes 00:13:39.324 Persistent Memory Region: Not Supported 00:13:39.324 Optional Asynchronous Events Supported 00:13:39.324 Namespace Attribute Notices: Supported 00:13:39.324 Firmware Activation Notices: Not Supported 00:13:39.324 ANA Change Notices: Not Supported 00:13:39.324 PLE Aggregate Log Change Notices: Not Supported 00:13:39.324 LBA Status Info Alert Notices: Not Supported 00:13:39.324 EGE Aggregate Log Change Notices: Not Supported 00:13:39.324 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.324 Zone Descriptor Change Notices: Not Supported 00:13:39.324 Discovery Log Change Notices: Not Supported 00:13:39.324 Controller Attributes 00:13:39.324 128-bit Host Identifier: Not Supported 00:13:39.324 Non-Operational Permissive Mode: Not Supported 00:13:39.324 NVM Sets: Not Supported 00:13:39.324 Read Recovery Levels: Not Supported 00:13:39.324 Endurance Groups: Not Supported 00:13:39.324 Predictable Latency Mode: Not Supported 00:13:39.324 Traffic Based Keep ALive: Not Supported 00:13:39.324 Namespace Granularity: Not Supported 00:13:39.324 SQ Associations: Not Supported 00:13:39.324 UUID List: Not Supported 00:13:39.324 Multi-Domain Subsystem: Not Supported 00:13:39.324 Fixed Capacity Management: Not Supported 00:13:39.324 Variable Capacity Management: Not Supported 00:13:39.324 Delete Endurance Group: Not Supported 00:13:39.324 Delete NVM Set: Not Supported 
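For reference, the per-controller reports in this log are produced by looping spdk_nvme_identify over each PCIe address, as the nvme.sh lines above show. A minimal Python sketch of the same flow, assuming the binary path and BDF list exactly as they appear in this log (the identify_all helper is illustrative, not part of the test suite):

    import subprocess

    IDENTIFY = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"  # path as logged
    BDFS = ["0000:00:10.0", "0000:00:11.0", "0000:00:12.0", "0000:00:13.0"]  # controllers seen here

    def identify_all(bdfs=BDFS):
        """Run one identify pass per controller and return {bdf: report text}."""
        reports = {}
        for bdf in bdfs:
            # -r picks the transport/address, -i 0 the shm id, matching the logged command line
            result = subprocess.run(
                [IDENTIFY, "-r", f"trtype:PCIe traddr:{bdf}", "-i", "0"],
                capture_output=True, text=True, check=True,
            )
            reports[bdf] = result.stdout
        return reports
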
00:13:39.324 Extended LBA Formats Supported: Supported 00:13:39.324 Flexible Data Placement Supported: Not Supported 00:13:39.324 00:13:39.324 Controller Memory Buffer Support 00:13:39.324 ================================ 00:13:39.324 Supported: No 00:13:39.324 00:13:39.324 Persistent Memory Region Support 00:13:39.324 ================================ 00:13:39.324 Supported: No 00:13:39.324 00:13:39.324 Admin Command Set Attributes 00:13:39.324 ============================ 00:13:39.324 Security Send/Receive: Not Supported 00:13:39.324 Format NVM: Supported 00:13:39.324 Firmware Activate/Download: Not Supported 00:13:39.324 Namespace Management: Supported 00:13:39.324 Device Self-Test: Not Supported 00:13:39.325 Directives: Supported 00:13:39.325 NVMe-MI: Not Supported 00:13:39.325 Virtualization Management: Not Supported 00:13:39.325 Doorbell Buffer Config: Supported 00:13:39.325 Get LBA Status Capability: Not Supported 00:13:39.325 Command & Feature Lockdown Capability: Not Supported 00:13:39.325 Abort Command Limit: 4 00:13:39.325 Async Event Request Limit: 4 00:13:39.325 Number of Firmware Slots: N/A 00:13:39.325 Firmware Slot 1 Read-Only: N/A 00:13:39.325 Firmware Activation Without Reset: N/A 00:13:39.325 Multiple Update Detection Support: N/A 00:13:39.325 Firmware Update Granularity: No Information Provided 00:13:39.325 Per-Namespace SMART Log: Yes 00:13:39.325 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.325 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:39.325 Command Effects Log Page: Supported 00:13:39.325 Get Log Page Extended Data: Supported 00:13:39.325 Telemetry Log Pages: Not Supported 00:13:39.325 Persistent Event Log Pages: Not Supported 00:13:39.325 Supported Log Pages Log Page: May Support 00:13:39.325 Commands Supported & Effects Log Page: Not Supported 00:13:39.325 Feature Identifiers & Effects Log Page:May Support 00:13:39.325 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.325 Data Area 4 for Telemetry Log: Not Supported 00:13:39.325 Error Log Page Entries Supported: 1 00:13:39.325 Keep Alive: Not Supported 00:13:39.325 00:13:39.325 NVM Command Set Attributes 00:13:39.325 ========================== 00:13:39.325 Submission Queue Entry Size 00:13:39.325 Max: 64 00:13:39.325 Min: 64 00:13:39.325 Completion Queue Entry Size 00:13:39.325 Max: 16 00:13:39.325 Min: 16 00:13:39.325 Number of Namespaces: 256 00:13:39.325 Compare Command: Supported 00:13:39.325 Write Uncorrectable Command: Not Supported 00:13:39.325 Dataset Management Command: Supported 00:13:39.325 Write Zeroes Command: Supported 00:13:39.325 Set Features Save Field: Supported 00:13:39.325 Reservations: Not Supported 00:13:39.325 Timestamp: Supported 00:13:39.325 Copy: Supported 00:13:39.325 Volatile Write Cache: Present 00:13:39.325 Atomic Write Unit (Normal): 1 00:13:39.325 Atomic Write Unit (PFail): 1 00:13:39.325 Atomic Compare & Write Unit: 1 00:13:39.325 Fused Compare & Write: Not Supported 00:13:39.325 Scatter-Gather List 00:13:39.325 SGL Command Set: Supported 00:13:39.325 SGL Keyed: Not Supported 00:13:39.325 SGL Bit Bucket Descriptor: Not Supported 00:13:39.325 SGL Metadata Pointer: Not Supported 00:13:39.325 Oversized SGL: Not Supported 00:13:39.325 SGL Metadata Address: Not Supported 00:13:39.325 SGL Offset: Not Supported 00:13:39.325 Transport SGL Data Block: Not Supported 00:13:39.325 Replay Protected Memory Block: Not Supported 00:13:39.325 00:13:39.325 Firmware Slot Information 00:13:39.325 ========================= 00:13:39.325 Active slot: 1 00:13:39.325 Slot 1 
Firmware Revision: 1.0 00:13:39.325 00:13:39.325 00:13:39.325 Commands Supported and Effects 00:13:39.325 ============================== 00:13:39.325 Admin Commands 00:13:39.325 -------------- 00:13:39.325 Delete I/O Submission Queue (00h): Supported 00:13:39.325 Create I/O Submission Queue (01h): Supported 00:13:39.325 Get Log Page (02h): Supported 00:13:39.325 Delete I/O Completion Queue (04h): Supported 00:13:39.325 Create I/O Completion Queue (05h): Supported 00:13:39.325 Identify (06h): Supported 00:13:39.325 Abort (08h): Supported 00:13:39.325 Set Features (09h): Supported 00:13:39.325 Get Features (0Ah): Supported 00:13:39.325 Asynchronous Event Request (0Ch): Supported 00:13:39.325 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:39.325 Directive Send (19h): Supported 00:13:39.325 Directive Receive (1Ah): Supported 00:13:39.325 Virtualization Management (1Ch): Supported 00:13:39.325 Doorbell Buffer Config (7Ch): Supported 00:13:39.325 Format NVM (80h): Supported LBA-Change 00:13:39.325 I/O Commands 00:13:39.325 ------------ 00:13:39.325 Flush (00h): Supported LBA-Change 00:13:39.325 Write (01h): Supported LBA-Change 00:13:39.325 Read (02h): Supported 00:13:39.325 Compare (05h): Supported 00:13:39.325 Write Zeroes (08h): Supported LBA-Change 00:13:39.325 Dataset Management (09h): Supported LBA-Change 00:13:39.325 Unknown (0Ch): Supported 00:13:39.325 Unknown (12h): Supported 00:13:39.325 Copy (19h): Supported LBA-Change 00:13:39.325 Unknown (1Dh): Supported LBA-Change 00:13:39.325 00:13:39.325 Error Log 00:13:39.325 ========= 00:13:39.325 00:13:39.325 Arbitration 00:13:39.325 =========== 00:13:39.325 Arbitration Burst: no limit 00:13:39.325 00:13:39.325 Power Management 00:13:39.325 ================ 00:13:39.325 Number of Power States: 1 00:13:39.325 Current Power State: Power State #0 00:13:39.325 Power State #0: 00:13:39.325 Max Power: 25.00 W 00:13:39.325 Non-Operational State: Operational 00:13:39.325 Entry Latency: 16 microseconds 00:13:39.325 Exit Latency: 4 microseconds 00:13:39.325 Relative Read Throughput: 0 00:13:39.325 Relative Read Latency: 0 00:13:39.325 Relative Write Throughput: 0 00:13:39.325 Relative Write Latency: 0 00:13:39.583 Idle Power: Not Reported 00:13:39.583 Active Power: Not Reported 00:13:39.583 Non-Operational Permissive Mode: Not Supported 00:13:39.583 00:13:39.583 Health Information 00:13:39.583 ================== 00:13:39.583 Critical Warnings: 00:13:39.583 Available Spare Space: OK 00:13:39.583 Temperature: OK 00:13:39.583 Device Reliability: OK 00:13:39.583 Read Only: No 00:13:39.583 Volatile Memory Backup: OK 00:13:39.583 Current Temperature: 323 Kelvin (50 Celsius) 00:13:39.583 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:39.583 Available Spare: 0% 00:13:39.583 Available Spare Threshold: 0% 00:13:39.583 Life Percentage Used: 0% 00:13:39.583 Data Units Read: 704 00:13:39.583 Data Units Written: 632 00:13:39.583 Host Read Commands: 34142 00:13:39.583 Host Write Commands: 33928 00:13:39.583 Controller Busy Time: 0 minutes 00:13:39.583 Power Cycles: 0 00:13:39.583 Power On Hours: 0 hours 00:13:39.583 Unsafe Shutdowns: 0 00:13:39.583 Unrecoverable Media Errors: 0 00:13:39.583 Lifetime Error Log Entries: 0 00:13:39.583 Warning Temperature Time: 0 minutes 00:13:39.583 Critical Temperature Time: 0 minutes 00:13:39.583 00:13:39.583 Number of Queues 00:13:39.583 ================ 00:13:39.583 Number of I/O Submission Queues: 64 00:13:39.583 Number of I/O Completion Queues: 64 00:13:39.583 00:13:39.583 ZNS Specific Controller Data 
00:13:39.583 ============================ 00:13:39.583 Zone Append Size Limit: 0 00:13:39.583 00:13:39.583 00:13:39.583 Active Namespaces 00:13:39.583 ================= 00:13:39.583 Namespace ID:1 00:13:39.583 Error Recovery Timeout: Unlimited 00:13:39.583 Command Set Identifier: NVM (00h) 00:13:39.583 Deallocate: Supported 00:13:39.583 Deallocated/Unwritten Error: Supported 00:13:39.583 Deallocated Read Value: All 0x00 00:13:39.583 Deallocate in Write Zeroes: Not Supported 00:13:39.583 Deallocated Guard Field: 0xFFFF 00:13:39.583 Flush: Supported 00:13:39.583 Reservation: Not Supported 00:13:39.583 Metadata Transferred as: Separate Metadata Buffer 00:13:39.583 Namespace Sharing Capabilities: Private 00:13:39.583 Size (in LBAs): 1548666 (5GiB) 00:13:39.583 Capacity (in LBAs): 1548666 (5GiB) 00:13:39.583 Utilization (in LBAs): 1548666 (5GiB) 00:13:39.583 Thin Provisioning: Not Supported 00:13:39.583 Per-NS Atomic Units: No 00:13:39.583 Maximum Single Source Range Length: 128 00:13:39.583 Maximum Copy Length: 128 00:13:39.583 Maximum Source Range Count: 128 00:13:39.583 NGUID/EUI64 Never Reused: No 00:13:39.583 Namespace Write Protected: No 00:13:39.583 Number of LBA Formats: 8 00:13:39.583 Current LBA Format: LBA Format #07 00:13:39.583 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.583 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.583 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.583 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.583 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.583 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.583 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.583 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.583 00:13:39.583 NVM Specific Namespace Data 00:13:39.583 =========================== 00:13:39.583 Logical Block Storage Tag Mask: 0 00:13:39.583 Protection Information Capabilities: 00:13:39.583 16b Guard Protection Information Storage Tag Support: No 00:13:39.583 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.583 Storage Tag Check Read Support: No 00:13:39.583 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.583 10:19:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:39.583 10:19:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:39.842 ===================================================== 00:13:39.842 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:39.842 ===================================================== 00:13:39.842 Controller Capabilities/Features 00:13:39.842 ================================ 00:13:39.842 Vendor ID: 1b36 00:13:39.842 
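Each dump ends with a Health Information block like the one for controller 12340 above. A small sketch, assuming only the field labels printed in this log, that pulls those counters out of a captured report (parse_health is a hypothetical helper, not SPDK code):

    import re

    def parse_health(report: str) -> dict:
        """Extract a few Health Information counters from one identify report."""
        patterns = {
            "current_temp_k": r"Current Temperature:\s*(\d+) Kelvin",
            "threshold_temp_k": r"Temperature Threshold:\s*(\d+) Kelvin",
            "data_units_read": r"Data Units Read:\s*(\d+)",
            "data_units_written": r"Data Units Written:\s*(\d+)",
        }
        health = {}
        for key, pat in patterns.items():
            match = re.search(pat, report)
            if match:
                health[key] = int(match.group(1))
        # the tool prints Celsius alongside Kelvin using C = K - 273 (323 K -> 50 C)
        if "current_temp_k" in health:
            health["current_temp_c"] = health["current_temp_k"] - 273
        return health
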
Subsystem Vendor ID: 1af4 00:13:39.842 Serial Number: 12341 00:13:39.842 Model Number: QEMU NVMe Ctrl 00:13:39.842 Firmware Version: 8.0.0 00:13:39.842 Recommended Arb Burst: 6 00:13:39.842 IEEE OUI Identifier: 00 54 52 00:13:39.842 Multi-path I/O 00:13:39.842 May have multiple subsystem ports: No 00:13:39.842 May have multiple controllers: No 00:13:39.842 Associated with SR-IOV VF: No 00:13:39.842 Max Data Transfer Size: 524288 00:13:39.842 Max Number of Namespaces: 256 00:13:39.842 Max Number of I/O Queues: 64 00:13:39.842 NVMe Specification Version (VS): 1.4 00:13:39.842 NVMe Specification Version (Identify): 1.4 00:13:39.842 Maximum Queue Entries: 2048 00:13:39.842 Contiguous Queues Required: Yes 00:13:39.842 Arbitration Mechanisms Supported 00:13:39.842 Weighted Round Robin: Not Supported 00:13:39.842 Vendor Specific: Not Supported 00:13:39.842 Reset Timeout: 7500 ms 00:13:39.842 Doorbell Stride: 4 bytes 00:13:39.842 NVM Subsystem Reset: Not Supported 00:13:39.842 Command Sets Supported 00:13:39.842 NVM Command Set: Supported 00:13:39.842 Boot Partition: Not Supported 00:13:39.842 Memory Page Size Minimum: 4096 bytes 00:13:39.842 Memory Page Size Maximum: 65536 bytes 00:13:39.842 Persistent Memory Region: Not Supported 00:13:39.842 Optional Asynchronous Events Supported 00:13:39.842 Namespace Attribute Notices: Supported 00:13:39.842 Firmware Activation Notices: Not Supported 00:13:39.842 ANA Change Notices: Not Supported 00:13:39.842 PLE Aggregate Log Change Notices: Not Supported 00:13:39.842 LBA Status Info Alert Notices: Not Supported 00:13:39.842 EGE Aggregate Log Change Notices: Not Supported 00:13:39.842 Normal NVM Subsystem Shutdown event: Not Supported 00:13:39.842 Zone Descriptor Change Notices: Not Supported 00:13:39.842 Discovery Log Change Notices: Not Supported 00:13:39.842 Controller Attributes 00:13:39.842 128-bit Host Identifier: Not Supported 00:13:39.842 Non-Operational Permissive Mode: Not Supported 00:13:39.842 NVM Sets: Not Supported 00:13:39.842 Read Recovery Levels: Not Supported 00:13:39.842 Endurance Groups: Not Supported 00:13:39.842 Predictable Latency Mode: Not Supported 00:13:39.842 Traffic Based Keep ALive: Not Supported 00:13:39.842 Namespace Granularity: Not Supported 00:13:39.842 SQ Associations: Not Supported 00:13:39.842 UUID List: Not Supported 00:13:39.842 Multi-Domain Subsystem: Not Supported 00:13:39.842 Fixed Capacity Management: Not Supported 00:13:39.842 Variable Capacity Management: Not Supported 00:13:39.842 Delete Endurance Group: Not Supported 00:13:39.842 Delete NVM Set: Not Supported 00:13:39.842 Extended LBA Formats Supported: Supported 00:13:39.842 Flexible Data Placement Supported: Not Supported 00:13:39.842 00:13:39.842 Controller Memory Buffer Support 00:13:39.842 ================================ 00:13:39.842 Supported: No 00:13:39.842 00:13:39.842 Persistent Memory Region Support 00:13:39.842 ================================ 00:13:39.842 Supported: No 00:13:39.842 00:13:39.842 Admin Command Set Attributes 00:13:39.842 ============================ 00:13:39.842 Security Send/Receive: Not Supported 00:13:39.842 Format NVM: Supported 00:13:39.842 Firmware Activate/Download: Not Supported 00:13:39.842 Namespace Management: Supported 00:13:39.842 Device Self-Test: Not Supported 00:13:39.842 Directives: Supported 00:13:39.842 NVMe-MI: Not Supported 00:13:39.842 Virtualization Management: Not Supported 00:13:39.842 Doorbell Buffer Config: Supported 00:13:39.842 Get LBA Status Capability: Not Supported 00:13:39.842 Command & Feature 
Lockdown Capability: Not Supported 00:13:39.842 Abort Command Limit: 4 00:13:39.842 Async Event Request Limit: 4 00:13:39.842 Number of Firmware Slots: N/A 00:13:39.842 Firmware Slot 1 Read-Only: N/A 00:13:39.842 Firmware Activation Without Reset: N/A 00:13:39.842 Multiple Update Detection Support: N/A 00:13:39.842 Firmware Update Granularity: No Information Provided 00:13:39.842 Per-Namespace SMART Log: Yes 00:13:39.842 Asymmetric Namespace Access Log Page: Not Supported 00:13:39.842 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:39.842 Command Effects Log Page: Supported 00:13:39.842 Get Log Page Extended Data: Supported 00:13:39.842 Telemetry Log Pages: Not Supported 00:13:39.842 Persistent Event Log Pages: Not Supported 00:13:39.842 Supported Log Pages Log Page: May Support 00:13:39.842 Commands Supported & Effects Log Page: Not Supported 00:13:39.842 Feature Identifiers & Effects Log Page:May Support 00:13:39.842 NVMe-MI Commands & Effects Log Page: May Support 00:13:39.842 Data Area 4 for Telemetry Log: Not Supported 00:13:39.842 Error Log Page Entries Supported: 1 00:13:39.842 Keep Alive: Not Supported 00:13:39.842 00:13:39.842 NVM Command Set Attributes 00:13:39.842 ========================== 00:13:39.842 Submission Queue Entry Size 00:13:39.842 Max: 64 00:13:39.842 Min: 64 00:13:39.842 Completion Queue Entry Size 00:13:39.842 Max: 16 00:13:39.842 Min: 16 00:13:39.842 Number of Namespaces: 256 00:13:39.842 Compare Command: Supported 00:13:39.842 Write Uncorrectable Command: Not Supported 00:13:39.842 Dataset Management Command: Supported 00:13:39.842 Write Zeroes Command: Supported 00:13:39.842 Set Features Save Field: Supported 00:13:39.842 Reservations: Not Supported 00:13:39.842 Timestamp: Supported 00:13:39.842 Copy: Supported 00:13:39.842 Volatile Write Cache: Present 00:13:39.842 Atomic Write Unit (Normal): 1 00:13:39.842 Atomic Write Unit (PFail): 1 00:13:39.842 Atomic Compare & Write Unit: 1 00:13:39.842 Fused Compare & Write: Not Supported 00:13:39.842 Scatter-Gather List 00:13:39.842 SGL Command Set: Supported 00:13:39.842 SGL Keyed: Not Supported 00:13:39.842 SGL Bit Bucket Descriptor: Not Supported 00:13:39.842 SGL Metadata Pointer: Not Supported 00:13:39.842 Oversized SGL: Not Supported 00:13:39.842 SGL Metadata Address: Not Supported 00:13:39.842 SGL Offset: Not Supported 00:13:39.842 Transport SGL Data Block: Not Supported 00:13:39.842 Replay Protected Memory Block: Not Supported 00:13:39.842 00:13:39.842 Firmware Slot Information 00:13:39.842 ========================= 00:13:39.842 Active slot: 1 00:13:39.842 Slot 1 Firmware Revision: 1.0 00:13:39.842 00:13:39.842 00:13:39.842 Commands Supported and Effects 00:13:39.842 ============================== 00:13:39.842 Admin Commands 00:13:39.842 -------------- 00:13:39.842 Delete I/O Submission Queue (00h): Supported 00:13:39.842 Create I/O Submission Queue (01h): Supported 00:13:39.842 Get Log Page (02h): Supported 00:13:39.842 Delete I/O Completion Queue (04h): Supported 00:13:39.842 Create I/O Completion Queue (05h): Supported 00:13:39.842 Identify (06h): Supported 00:13:39.842 Abort (08h): Supported 00:13:39.842 Set Features (09h): Supported 00:13:39.842 Get Features (0Ah): Supported 00:13:39.842 Asynchronous Event Request (0Ch): Supported 00:13:39.842 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:39.842 Directive Send (19h): Supported 00:13:39.842 Directive Receive (1Ah): Supported 00:13:39.842 Virtualization Management (1Ch): Supported 00:13:39.842 Doorbell Buffer Config (7Ch): Supported 
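The NVM Command Set Attributes above fix the submission queue entry at 64 bytes and the completion queue entry at 16 bytes, with a maximum queue depth of 2048. A quick back-of-the-envelope check of the memory a full-depth queue pair implies (all numbers straight from the report):

    SQE_BYTES, CQE_BYTES = 64, 16   # Submission/Completion Queue Entry Size (Max), per the report
    MAX_DEPTH = 2048                # Maximum Queue Entries, per the report

    sq_bytes = SQE_BYTES * MAX_DEPTH    # 131072 bytes = 128 KiB for one full-depth SQ
    cq_bytes = CQE_BYTES * MAX_DEPTH    # 32768 bytes  =  32 KiB for one full-depth CQ
    assert (sq_bytes, cq_bytes) == (128 * 1024, 32 * 1024)
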
00:13:39.842 Format NVM (80h): Supported LBA-Change 00:13:39.842 I/O Commands 00:13:39.842 ------------ 00:13:39.842 Flush (00h): Supported LBA-Change 00:13:39.842 Write (01h): Supported LBA-Change 00:13:39.842 Read (02h): Supported 00:13:39.842 Compare (05h): Supported 00:13:39.842 Write Zeroes (08h): Supported LBA-Change 00:13:39.842 Dataset Management (09h): Supported LBA-Change 00:13:39.842 Unknown (0Ch): Supported 00:13:39.842 Unknown (12h): Supported 00:13:39.843 Copy (19h): Supported LBA-Change 00:13:39.843 Unknown (1Dh): Supported LBA-Change 00:13:39.843 00:13:39.843 Error Log 00:13:39.843 ========= 00:13:39.843 00:13:39.843 Arbitration 00:13:39.843 =========== 00:13:39.843 Arbitration Burst: no limit 00:13:39.843 00:13:39.843 Power Management 00:13:39.843 ================ 00:13:39.843 Number of Power States: 1 00:13:39.843 Current Power State: Power State #0 00:13:39.843 Power State #0: 00:13:39.843 Max Power: 25.00 W 00:13:39.843 Non-Operational State: Operational 00:13:39.843 Entry Latency: 16 microseconds 00:13:39.843 Exit Latency: 4 microseconds 00:13:39.843 Relative Read Throughput: 0 00:13:39.843 Relative Read Latency: 0 00:13:39.843 Relative Write Throughput: 0 00:13:39.843 Relative Write Latency: 0 00:13:39.843 Idle Power: Not Reported 00:13:39.843 Active Power: Not Reported 00:13:39.843 Non-Operational Permissive Mode: Not Supported 00:13:39.843 00:13:39.843 Health Information 00:13:39.843 ================== 00:13:39.843 Critical Warnings: 00:13:39.843 Available Spare Space: OK 00:13:39.843 Temperature: OK 00:13:39.843 Device Reliability: OK 00:13:39.843 Read Only: No 00:13:39.843 Volatile Memory Backup: OK 00:13:39.843 Current Temperature: 323 Kelvin (50 Celsius) 00:13:39.843 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:39.843 Available Spare: 0% 00:13:39.843 Available Spare Threshold: 0% 00:13:39.843 Life Percentage Used: 0% 00:13:39.843 Data Units Read: 1071 00:13:39.843 Data Units Written: 938 00:13:39.843 Host Read Commands: 50728 00:13:39.843 Host Write Commands: 49516 00:13:39.843 Controller Busy Time: 0 minutes 00:13:39.843 Power Cycles: 0 00:13:39.843 Power On Hours: 0 hours 00:13:39.843 Unsafe Shutdowns: 0 00:13:39.843 Unrecoverable Media Errors: 0 00:13:39.843 Lifetime Error Log Entries: 0 00:13:39.843 Warning Temperature Time: 0 minutes 00:13:39.843 Critical Temperature Time: 0 minutes 00:13:39.843 00:13:39.843 Number of Queues 00:13:39.843 ================ 00:13:39.843 Number of I/O Submission Queues: 64 00:13:39.843 Number of I/O Completion Queues: 64 00:13:39.843 00:13:39.843 ZNS Specific Controller Data 00:13:39.843 ============================ 00:13:39.843 Zone Append Size Limit: 0 00:13:39.843 00:13:39.843 00:13:39.843 Active Namespaces 00:13:39.843 ================= 00:13:39.843 Namespace ID:1 00:13:39.843 Error Recovery Timeout: Unlimited 00:13:39.843 Command Set Identifier: NVM (00h) 00:13:39.843 Deallocate: Supported 00:13:39.843 Deallocated/Unwritten Error: Supported 00:13:39.843 Deallocated Read Value: All 0x00 00:13:39.843 Deallocate in Write Zeroes: Not Supported 00:13:39.843 Deallocated Guard Field: 0xFFFF 00:13:39.843 Flush: Supported 00:13:39.843 Reservation: Not Supported 00:13:39.843 Namespace Sharing Capabilities: Private 00:13:39.843 Size (in LBAs): 1310720 (5GiB) 00:13:39.843 Capacity (in LBAs): 1310720 (5GiB) 00:13:39.843 Utilization (in LBAs): 1310720 (5GiB) 00:13:39.843 Thin Provisioning: Not Supported 00:13:39.843 Per-NS Atomic Units: No 00:13:39.843 Maximum Single Source Range Length: 128 00:13:39.843 Maximum Copy Length: 128 
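The namespace sizes above are reported in LBAs with a GiB figure in parentheses; the two agree via LBA count times the data size of the current LBA format (4096 bytes for format #04). Checking the figures that appear in this log:

    def ns_bytes(lbas: int, lba_data_size: int = 4096) -> int:
        """Namespace capacity in bytes: LBA count times the current format's data size."""
        return lbas * lba_data_size

    assert ns_bytes(1310720) == 5 * 1024**3   # 12341 ns1: 1310720 LBAs -> 5 GiB
    assert ns_bytes(1048576) == 4 * 1024**3   # 12342 ns1-3: 1048576 LBAs -> 4 GiB each
    assert ns_bytes(262144) == 1 * 1024**3    # the 1 GiB namespace earlier in this log
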
00:13:39.843 Maximum Source Range Count: 128 00:13:39.843 NGUID/EUI64 Never Reused: No 00:13:39.843 Namespace Write Protected: No 00:13:39.843 Number of LBA Formats: 8 00:13:39.843 Current LBA Format: LBA Format #04 00:13:39.843 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:39.843 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:39.843 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:39.843 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:39.843 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:39.843 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:39.843 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:39.843 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:39.843 00:13:39.843 NVM Specific Namespace Data 00:13:39.843 =========================== 00:13:39.843 Logical Block Storage Tag Mask: 0 00:13:39.843 Protection Information Capabilities: 00:13:39.843 16b Guard Protection Information Storage Tag Support: No 00:13:39.843 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:39.843 Storage Tag Check Read Support: No 00:13:39.843 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:39.843 10:19:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:39.843 10:19:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:40.102 ===================================================== 00:13:40.102 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:40.102 ===================================================== 00:13:40.102 Controller Capabilities/Features 00:13:40.102 ================================ 00:13:40.102 Vendor ID: 1b36 00:13:40.102 Subsystem Vendor ID: 1af4 00:13:40.102 Serial Number: 12342 00:13:40.102 Model Number: QEMU NVMe Ctrl 00:13:40.102 Firmware Version: 8.0.0 00:13:40.102 Recommended Arb Burst: 6 00:13:40.102 IEEE OUI Identifier: 00 54 52 00:13:40.102 Multi-path I/O 00:13:40.102 May have multiple subsystem ports: No 00:13:40.102 May have multiple controllers: No 00:13:40.102 Associated with SR-IOV VF: No 00:13:40.102 Max Data Transfer Size: 524288 00:13:40.102 Max Number of Namespaces: 256 00:13:40.102 Max Number of I/O Queues: 64 00:13:40.102 NVMe Specification Version (VS): 1.4 00:13:40.102 NVMe Specification Version (Identify): 1.4 00:13:40.102 Maximum Queue Entries: 2048 00:13:40.102 Contiguous Queues Required: Yes 00:13:40.102 Arbitration Mechanisms Supported 00:13:40.102 Weighted Round Robin: Not Supported 00:13:40.102 Vendor Specific: Not Supported 00:13:40.102 Reset Timeout: 7500 ms 00:13:40.102 Doorbell Stride: 4 bytes 00:13:40.102 NVM Subsystem Reset: Not Supported 00:13:40.102 Command Sets Supported 00:13:40.102 NVM Command 
Set: Supported 00:13:40.102 Boot Partition: Not Supported 00:13:40.102 Memory Page Size Minimum: 4096 bytes 00:13:40.102 Memory Page Size Maximum: 65536 bytes 00:13:40.102 Persistent Memory Region: Not Supported 00:13:40.102 Optional Asynchronous Events Supported 00:13:40.102 Namespace Attribute Notices: Supported 00:13:40.102 Firmware Activation Notices: Not Supported 00:13:40.102 ANA Change Notices: Not Supported 00:13:40.102 PLE Aggregate Log Change Notices: Not Supported 00:13:40.102 LBA Status Info Alert Notices: Not Supported 00:13:40.102 EGE Aggregate Log Change Notices: Not Supported 00:13:40.102 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.102 Zone Descriptor Change Notices: Not Supported 00:13:40.102 Discovery Log Change Notices: Not Supported 00:13:40.102 Controller Attributes 00:13:40.102 128-bit Host Identifier: Not Supported 00:13:40.102 Non-Operational Permissive Mode: Not Supported 00:13:40.102 NVM Sets: Not Supported 00:13:40.102 Read Recovery Levels: Not Supported 00:13:40.102 Endurance Groups: Not Supported 00:13:40.102 Predictable Latency Mode: Not Supported 00:13:40.102 Traffic Based Keep ALive: Not Supported 00:13:40.102 Namespace Granularity: Not Supported 00:13:40.102 SQ Associations: Not Supported 00:13:40.102 UUID List: Not Supported 00:13:40.102 Multi-Domain Subsystem: Not Supported 00:13:40.102 Fixed Capacity Management: Not Supported 00:13:40.102 Variable Capacity Management: Not Supported 00:13:40.102 Delete Endurance Group: Not Supported 00:13:40.102 Delete NVM Set: Not Supported 00:13:40.102 Extended LBA Formats Supported: Supported 00:13:40.102 Flexible Data Placement Supported: Not Supported 00:13:40.102 00:13:40.102 Controller Memory Buffer Support 00:13:40.102 ================================ 00:13:40.102 Supported: No 00:13:40.102 00:13:40.102 Persistent Memory Region Support 00:13:40.102 ================================ 00:13:40.102 Supported: No 00:13:40.102 00:13:40.102 Admin Command Set Attributes 00:13:40.102 ============================ 00:13:40.102 Security Send/Receive: Not Supported 00:13:40.102 Format NVM: Supported 00:13:40.102 Firmware Activate/Download: Not Supported 00:13:40.102 Namespace Management: Supported 00:13:40.102 Device Self-Test: Not Supported 00:13:40.102 Directives: Supported 00:13:40.102 NVMe-MI: Not Supported 00:13:40.102 Virtualization Management: Not Supported 00:13:40.102 Doorbell Buffer Config: Supported 00:13:40.102 Get LBA Status Capability: Not Supported 00:13:40.102 Command & Feature Lockdown Capability: Not Supported 00:13:40.102 Abort Command Limit: 4 00:13:40.102 Async Event Request Limit: 4 00:13:40.102 Number of Firmware Slots: N/A 00:13:40.102 Firmware Slot 1 Read-Only: N/A 00:13:40.102 Firmware Activation Without Reset: N/A 00:13:40.102 Multiple Update Detection Support: N/A 00:13:40.102 Firmware Update Granularity: No Information Provided 00:13:40.102 Per-Namespace SMART Log: Yes 00:13:40.102 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.102 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:40.102 Command Effects Log Page: Supported 00:13:40.102 Get Log Page Extended Data: Supported 00:13:40.102 Telemetry Log Pages: Not Supported 00:13:40.102 Persistent Event Log Pages: Not Supported 00:13:40.102 Supported Log Pages Log Page: May Support 00:13:40.102 Commands Supported & Effects Log Page: Not Supported 00:13:40.102 Feature Identifiers & Effects Log Page:May Support 00:13:40.102 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.102 Data Area 4 for Telemetry Log: Not 
Supported 00:13:40.102 Error Log Page Entries Supported: 1 00:13:40.102 Keep Alive: Not Supported 00:13:40.102 00:13:40.102 NVM Command Set Attributes 00:13:40.102 ========================== 00:13:40.102 Submission Queue Entry Size 00:13:40.102 Max: 64 00:13:40.102 Min: 64 00:13:40.102 Completion Queue Entry Size 00:13:40.102 Max: 16 00:13:40.102 Min: 16 00:13:40.102 Number of Namespaces: 256 00:13:40.102 Compare Command: Supported 00:13:40.102 Write Uncorrectable Command: Not Supported 00:13:40.102 Dataset Management Command: Supported 00:13:40.102 Write Zeroes Command: Supported 00:13:40.102 Set Features Save Field: Supported 00:13:40.102 Reservations: Not Supported 00:13:40.102 Timestamp: Supported 00:13:40.102 Copy: Supported 00:13:40.102 Volatile Write Cache: Present 00:13:40.102 Atomic Write Unit (Normal): 1 00:13:40.102 Atomic Write Unit (PFail): 1 00:13:40.102 Atomic Compare & Write Unit: 1 00:13:40.102 Fused Compare & Write: Not Supported 00:13:40.102 Scatter-Gather List 00:13:40.102 SGL Command Set: Supported 00:13:40.102 SGL Keyed: Not Supported 00:13:40.102 SGL Bit Bucket Descriptor: Not Supported 00:13:40.102 SGL Metadata Pointer: Not Supported 00:13:40.102 Oversized SGL: Not Supported 00:13:40.102 SGL Metadata Address: Not Supported 00:13:40.102 SGL Offset: Not Supported 00:13:40.102 Transport SGL Data Block: Not Supported 00:13:40.102 Replay Protected Memory Block: Not Supported 00:13:40.102 00:13:40.102 Firmware Slot Information 00:13:40.102 ========================= 00:13:40.102 Active slot: 1 00:13:40.102 Slot 1 Firmware Revision: 1.0 00:13:40.102 00:13:40.102 00:13:40.102 Commands Supported and Effects 00:13:40.102 ============================== 00:13:40.102 Admin Commands 00:13:40.102 -------------- 00:13:40.102 Delete I/O Submission Queue (00h): Supported 00:13:40.102 Create I/O Submission Queue (01h): Supported 00:13:40.102 Get Log Page (02h): Supported 00:13:40.102 Delete I/O Completion Queue (04h): Supported 00:13:40.102 Create I/O Completion Queue (05h): Supported 00:13:40.102 Identify (06h): Supported 00:13:40.102 Abort (08h): Supported 00:13:40.102 Set Features (09h): Supported 00:13:40.102 Get Features (0Ah): Supported 00:13:40.102 Asynchronous Event Request (0Ch): Supported 00:13:40.102 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.102 Directive Send (19h): Supported 00:13:40.102 Directive Receive (1Ah): Supported 00:13:40.102 Virtualization Management (1Ch): Supported 00:13:40.102 Doorbell Buffer Config (7Ch): Supported 00:13:40.102 Format NVM (80h): Supported LBA-Change 00:13:40.102 I/O Commands 00:13:40.102 ------------ 00:13:40.102 Flush (00h): Supported LBA-Change 00:13:40.102 Write (01h): Supported LBA-Change 00:13:40.102 Read (02h): Supported 00:13:40.102 Compare (05h): Supported 00:13:40.102 Write Zeroes (08h): Supported LBA-Change 00:13:40.102 Dataset Management (09h): Supported LBA-Change 00:13:40.102 Unknown (0Ch): Supported 00:13:40.102 Unknown (12h): Supported 00:13:40.102 Copy (19h): Supported LBA-Change 00:13:40.102 Unknown (1Dh): Supported LBA-Change 00:13:40.102 00:13:40.102 Error Log 00:13:40.102 ========= 00:13:40.102 00:13:40.102 Arbitration 00:13:40.102 =========== 00:13:40.102 Arbitration Burst: no limit 00:13:40.102 00:13:40.102 Power Management 00:13:40.102 ================ 00:13:40.102 Number of Power States: 1 00:13:40.102 Current Power State: Power State #0 00:13:40.102 Power State #0: 00:13:40.102 Max Power: 25.00 W 00:13:40.102 Non-Operational State: Operational 00:13:40.102 Entry Latency: 16 microseconds 
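This controller, like the others here, reports a Doorbell Stride of 4 bytes (i.e. CAP.DSTRD = 0). Per the NVMe base specification the doorbell registers start at BAR0 offset 0x1000, so the per-queue offsets work out as below (a worked sketch of the spec arithmetic, not SPDK code):

    STRIDE = 4  # bytes: 4 << CAP.DSTRD with DSTRD == 0, as reported above

    def sq_tail_doorbell(qid: int) -> int:
        """BAR0 offset of submission queue qid's tail doorbell."""
        return 0x1000 + (2 * qid) * STRIDE

    def cq_head_doorbell(qid: int) -> int:
        """BAR0 offset of completion queue qid's head doorbell."""
        return 0x1000 + (2 * qid + 1) * STRIDE

    assert sq_tail_doorbell(0) == 0x1000   # admin SQ tail
    assert cq_head_doorbell(0) == 0x1004   # admin CQ head
    assert sq_tail_doorbell(1) == 0x1008   # first I/O SQ tail
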
00:13:40.102 Exit Latency: 4 microseconds 00:13:40.102 Relative Read Throughput: 0 00:13:40.102 Relative Read Latency: 0 00:13:40.103 Relative Write Throughput: 0 00:13:40.103 Relative Write Latency: 0 00:13:40.103 Idle Power: Not Reported 00:13:40.103 Active Power: Not Reported 00:13:40.103 Non-Operational Permissive Mode: Not Supported 00:13:40.103 00:13:40.103 Health Information 00:13:40.103 ================== 00:13:40.103 Critical Warnings: 00:13:40.103 Available Spare Space: OK 00:13:40.103 Temperature: OK 00:13:40.103 Device Reliability: OK 00:13:40.103 Read Only: No 00:13:40.103 Volatile Memory Backup: OK 00:13:40.103 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.103 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.103 Available Spare: 0% 00:13:40.103 Available Spare Threshold: 0% 00:13:40.103 Life Percentage Used: 0% 00:13:40.103 Data Units Read: 2198 00:13:40.103 Data Units Written: 1985 00:13:40.103 Host Read Commands: 103632 00:13:40.103 Host Write Commands: 101901 00:13:40.103 Controller Busy Time: 0 minutes 00:13:40.103 Power Cycles: 0 00:13:40.103 Power On Hours: 0 hours 00:13:40.103 Unsafe Shutdowns: 0 00:13:40.103 Unrecoverable Media Errors: 0 00:13:40.103 Lifetime Error Log Entries: 0 00:13:40.103 Warning Temperature Time: 0 minutes 00:13:40.103 Critical Temperature Time: 0 minutes 00:13:40.103 00:13:40.103 Number of Queues 00:13:40.103 ================ 00:13:40.103 Number of I/O Submission Queues: 64 00:13:40.103 Number of I/O Completion Queues: 64 00:13:40.103 00:13:40.103 ZNS Specific Controller Data 00:13:40.103 ============================ 00:13:40.103 Zone Append Size Limit: 0 00:13:40.103 00:13:40.103 00:13:40.103 Active Namespaces 00:13:40.103 ================= 00:13:40.103 Namespace ID:1 00:13:40.103 Error Recovery Timeout: Unlimited 00:13:40.103 Command Set Identifier: NVM (00h) 00:13:40.103 Deallocate: Supported 00:13:40.103 Deallocated/Unwritten Error: Supported 00:13:40.103 Deallocated Read Value: All 0x00 00:13:40.103 Deallocate in Write Zeroes: Not Supported 00:13:40.103 Deallocated Guard Field: 0xFFFF 00:13:40.103 Flush: Supported 00:13:40.103 Reservation: Not Supported 00:13:40.103 Namespace Sharing Capabilities: Private 00:13:40.103 Size (in LBAs): 1048576 (4GiB) 00:13:40.103 Capacity (in LBAs): 1048576 (4GiB) 00:13:40.103 Utilization (in LBAs): 1048576 (4GiB) 00:13:40.103 Thin Provisioning: Not Supported 00:13:40.103 Per-NS Atomic Units: No 00:13:40.103 Maximum Single Source Range Length: 128 00:13:40.103 Maximum Copy Length: 128 00:13:40.103 Maximum Source Range Count: 128 00:13:40.103 NGUID/EUI64 Never Reused: No 00:13:40.103 Namespace Write Protected: No 00:13:40.103 Number of LBA Formats: 8 00:13:40.103 Current LBA Format: LBA Format #04 00:13:40.103 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.103 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.103 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.103 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.103 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.103 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.103 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.103 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.103 00:13:40.103 NVM Specific Namespace Data 00:13:40.103 =========================== 00:13:40.103 Logical Block Storage Tag Mask: 0 00:13:40.103 Protection Information Capabilities: 00:13:40.103 16b Guard Protection Information Storage Tag Support: No 00:13:40.103 16b Guard Protection Information Storage 
Tag Mask: Any bit in LBSTM can be 0 00:13:40.103 Storage Tag Check Read Support: No 00:13:40.103 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Namespace ID:2 00:13:40.103 Error Recovery Timeout: Unlimited 00:13:40.103 Command Set Identifier: NVM (00h) 00:13:40.103 Deallocate: Supported 00:13:40.103 Deallocated/Unwritten Error: Supported 00:13:40.103 Deallocated Read Value: All 0x00 00:13:40.103 Deallocate in Write Zeroes: Not Supported 00:13:40.103 Deallocated Guard Field: 0xFFFF 00:13:40.103 Flush: Supported 00:13:40.103 Reservation: Not Supported 00:13:40.103 Namespace Sharing Capabilities: Private 00:13:40.103 Size (in LBAs): 1048576 (4GiB) 00:13:40.103 Capacity (in LBAs): 1048576 (4GiB) 00:13:40.103 Utilization (in LBAs): 1048576 (4GiB) 00:13:40.103 Thin Provisioning: Not Supported 00:13:40.103 Per-NS Atomic Units: No 00:13:40.103 Maximum Single Source Range Length: 128 00:13:40.103 Maximum Copy Length: 128 00:13:40.103 Maximum Source Range Count: 128 00:13:40.103 NGUID/EUI64 Never Reused: No 00:13:40.103 Namespace Write Protected: No 00:13:40.103 Number of LBA Formats: 8 00:13:40.103 Current LBA Format: LBA Format #04 00:13:40.103 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.103 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.103 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.103 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.103 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.103 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.103 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.103 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.103 00:13:40.103 NVM Specific Namespace Data 00:13:40.103 =========================== 00:13:40.103 Logical Block Storage Tag Mask: 0 00:13:40.103 Protection Information Capabilities: 00:13:40.103 16b Guard Protection Information Storage Tag Support: No 00:13:40.103 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.103 Storage Tag Check Read Support: No 00:13:40.103 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:13:40.103 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Namespace ID:3 00:13:40.103 Error Recovery Timeout: Unlimited 00:13:40.103 Command Set Identifier: NVM (00h) 00:13:40.103 Deallocate: Supported 00:13:40.103 Deallocated/Unwritten Error: Supported 00:13:40.103 Deallocated Read Value: All 0x00 00:13:40.103 Deallocate in Write Zeroes: Not Supported 00:13:40.103 Deallocated Guard Field: 0xFFFF 00:13:40.103 Flush: Supported 00:13:40.103 Reservation: Not Supported 00:13:40.103 Namespace Sharing Capabilities: Private 00:13:40.103 Size (in LBAs): 1048576 (4GiB) 00:13:40.103 Capacity (in LBAs): 1048576 (4GiB) 00:13:40.103 Utilization (in LBAs): 1048576 (4GiB) 00:13:40.103 Thin Provisioning: Not Supported 00:13:40.103 Per-NS Atomic Units: No 00:13:40.103 Maximum Single Source Range Length: 128 00:13:40.103 Maximum Copy Length: 128 00:13:40.103 Maximum Source Range Count: 128 00:13:40.103 NGUID/EUI64 Never Reused: No 00:13:40.103 Namespace Write Protected: No 00:13:40.103 Number of LBA Formats: 8 00:13:40.103 Current LBA Format: LBA Format #04 00:13:40.103 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.103 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.103 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.103 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.103 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.103 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.103 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.103 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.103 00:13:40.103 NVM Specific Namespace Data 00:13:40.103 =========================== 00:13:40.103 Logical Block Storage Tag Mask: 0 00:13:40.103 Protection Information Capabilities: 00:13:40.103 16b Guard Protection Information Storage Tag Support: No 00:13:40.103 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.103 Storage Tag Check Read Support: No 00:13:40.103 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.103 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.104 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.104 10:19:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:40.104 10:19:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:40.670 ===================================================== 00:13:40.670 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:40.670 ===================================================== 00:13:40.670 Controller Capabilities/Features 00:13:40.670 ================================ 00:13:40.670 Vendor ID: 1b36 00:13:40.670 Subsystem Vendor ID: 1af4 00:13:40.670 Serial Number: 12343 00:13:40.670 Model Number: QEMU NVMe Ctrl 00:13:40.670 Firmware Version: 
8.0.0 00:13:40.670 Recommended Arb Burst: 6 00:13:40.670 IEEE OUI Identifier: 00 54 52 00:13:40.670 Multi-path I/O 00:13:40.670 May have multiple subsystem ports: No 00:13:40.670 May have multiple controllers: Yes 00:13:40.670 Associated with SR-IOV VF: No 00:13:40.670 Max Data Transfer Size: 524288 00:13:40.670 Max Number of Namespaces: 256 00:13:40.670 Max Number of I/O Queues: 64 00:13:40.670 NVMe Specification Version (VS): 1.4 00:13:40.670 NVMe Specification Version (Identify): 1.4 00:13:40.670 Maximum Queue Entries: 2048 00:13:40.670 Contiguous Queues Required: Yes 00:13:40.670 Arbitration Mechanisms Supported 00:13:40.670 Weighted Round Robin: Not Supported 00:13:40.670 Vendor Specific: Not Supported 00:13:40.670 Reset Timeout: 7500 ms 00:13:40.670 Doorbell Stride: 4 bytes 00:13:40.670 NVM Subsystem Reset: Not Supported 00:13:40.670 Command Sets Supported 00:13:40.670 NVM Command Set: Supported 00:13:40.670 Boot Partition: Not Supported 00:13:40.670 Memory Page Size Minimum: 4096 bytes 00:13:40.670 Memory Page Size Maximum: 65536 bytes 00:13:40.670 Persistent Memory Region: Not Supported 00:13:40.670 Optional Asynchronous Events Supported 00:13:40.670 Namespace Attribute Notices: Supported 00:13:40.670 Firmware Activation Notices: Not Supported 00:13:40.671 ANA Change Notices: Not Supported 00:13:40.671 PLE Aggregate Log Change Notices: Not Supported 00:13:40.671 LBA Status Info Alert Notices: Not Supported 00:13:40.671 EGE Aggregate Log Change Notices: Not Supported 00:13:40.671 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.671 Zone Descriptor Change Notices: Not Supported 00:13:40.671 Discovery Log Change Notices: Not Supported 00:13:40.671 Controller Attributes 00:13:40.671 128-bit Host Identifier: Not Supported 00:13:40.671 Non-Operational Permissive Mode: Not Supported 00:13:40.671 NVM Sets: Not Supported 00:13:40.671 Read Recovery Levels: Not Supported 00:13:40.671 Endurance Groups: Supported 00:13:40.671 Predictable Latency Mode: Not Supported 00:13:40.671 Traffic Based Keep Alive: Not Supported 00:13:40.671 Namespace Granularity: Not Supported 00:13:40.671 SQ Associations: Not Supported 00:13:40.671 UUID List: Not Supported 00:13:40.671 Multi-Domain Subsystem: Not Supported 00:13:40.671 Fixed Capacity Management: Not Supported 00:13:40.671 Variable Capacity Management: Not Supported 00:13:40.671 Delete Endurance Group: Not Supported 00:13:40.671 Delete NVM Set: Not Supported 00:13:40.671 Extended LBA Formats Supported: Supported 00:13:40.671 Flexible Data Placement Supported: Supported 00:13:40.671 00:13:40.671 Controller Memory Buffer Support 00:13:40.671 ================================ 00:13:40.671 Supported: No 00:13:40.671 00:13:40.671 Persistent Memory Region Support 00:13:40.671 ================================ 00:13:40.671 Supported: No 00:13:40.671 00:13:40.671 Admin Command Set Attributes 00:13:40.671 ============================ 00:13:40.671 Security Send/Receive: Not Supported 00:13:40.671 Format NVM: Supported 00:13:40.671 Firmware Activate/Download: Not Supported 00:13:40.671 Namespace Management: Supported 00:13:40.671 Device Self-Test: Not Supported 00:13:40.671 Directives: Supported 00:13:40.671 NVMe-MI: Not Supported 00:13:40.671 Virtualization Management: Not Supported 00:13:40.671 Doorbell Buffer Config: Supported 00:13:40.671 Get LBA Status Capability: Not Supported 00:13:40.671 Command & Feature Lockdown Capability: Not Supported 00:13:40.671 Abort Command Limit: 4 00:13:40.671 Async Event Request Limit: 4 00:13:40.671 Number of Firmware
Slots: N/A 00:13:40.671 Firmware Slot 1 Read-Only: N/A 00:13:40.671 Firmware Activation Without Reset: N/A 00:13:40.671 Multiple Update Detection Support: N/A 00:13:40.671 Firmware Update Granularity: No Information Provided 00:13:40.671 Per-Namespace SMART Log: Yes 00:13:40.671 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.671 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:40.671 Command Effects Log Page: Supported 00:13:40.671 Get Log Page Extended Data: Supported 00:13:40.671 Telemetry Log Pages: Not Supported 00:13:40.671 Persistent Event Log Pages: Not Supported 00:13:40.671 Supported Log Pages Log Page: May Support 00:13:40.671 Commands Supported & Effects Log Page: Not Supported 00:13:40.671 Feature Identifiers & Effects Log Page: May Support 00:13:40.671 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.671 Data Area 4 for Telemetry Log: Not Supported 00:13:40.671 Error Log Page Entries Supported: 1 00:13:40.671 Keep Alive: Not Supported 00:13:40.671 00:13:40.671 NVM Command Set Attributes 00:13:40.671 ========================== 00:13:40.671 Submission Queue Entry Size 00:13:40.671 Max: 64 00:13:40.671 Min: 64 00:13:40.671 Completion Queue Entry Size 00:13:40.671 Max: 16 00:13:40.671 Min: 16 00:13:40.671 Number of Namespaces: 256 00:13:40.671 Compare Command: Supported 00:13:40.671 Write Uncorrectable Command: Not Supported 00:13:40.671 Dataset Management Command: Supported 00:13:40.671 Write Zeroes Command: Supported 00:13:40.671 Set Features Save Field: Supported 00:13:40.671 Reservations: Not Supported 00:13:40.671 Timestamp: Supported 00:13:40.671 Copy: Supported 00:13:40.671 Volatile Write Cache: Present 00:13:40.671 Atomic Write Unit (Normal): 1 00:13:40.671 Atomic Write Unit (PFail): 1 00:13:40.671 Atomic Compare & Write Unit: 1 00:13:40.671 Fused Compare & Write: Not Supported 00:13:40.671 Scatter-Gather List 00:13:40.671 SGL Command Set: Supported 00:13:40.671 SGL Keyed: Not Supported 00:13:40.671 SGL Bit Bucket Descriptor: Not Supported 00:13:40.671 SGL Metadata Pointer: Not Supported 00:13:40.671 Oversized SGL: Not Supported 00:13:40.671 SGL Metadata Address: Not Supported 00:13:40.671 SGL Offset: Not Supported 00:13:40.671 Transport SGL Data Block: Not Supported 00:13:40.671 Replay Protected Memory Block: Not Supported 00:13:40.671 00:13:40.671 Firmware Slot Information 00:13:40.671 ========================= 00:13:40.671 Active slot: 1 00:13:40.671 Slot 1 Firmware Revision: 1.0 00:13:40.671 00:13:40.671 00:13:40.671 Commands Supported and Effects 00:13:40.671 ============================== 00:13:40.671 Admin Commands 00:13:40.671 -------------- 00:13:40.671 Delete I/O Submission Queue (00h): Supported 00:13:40.671 Create I/O Submission Queue (01h): Supported 00:13:40.671 Get Log Page (02h): Supported 00:13:40.671 Delete I/O Completion Queue (04h): Supported 00:13:40.671 Create I/O Completion Queue (05h): Supported 00:13:40.671 Identify (06h): Supported 00:13:40.671 Abort (08h): Supported 00:13:40.671 Set Features (09h): Supported 00:13:40.671 Get Features (0Ah): Supported 00:13:40.671 Asynchronous Event Request (0Ch): Supported 00:13:40.671 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.671 Directive Send (19h): Supported 00:13:40.671 Directive Receive (1Ah): Supported 00:13:40.671 Virtualization Management (1Ch): Supported 00:13:40.671 Doorbell Buffer Config (7Ch): Supported 00:13:40.671 Format NVM (80h): Supported LBA-Change 00:13:40.671 I/O Commands 00:13:40.671 ------------ 00:13:40.671 Flush (00h): Supported
LBA-Change 00:13:40.671 Write (01h): Supported LBA-Change 00:13:40.671 Read (02h): Supported 00:13:40.671 Compare (05h): Supported 00:13:40.671 Write Zeroes (08h): Supported LBA-Change 00:13:40.671 Dataset Management (09h): Supported LBA-Change 00:13:40.671 Unknown (0Ch): Supported 00:13:40.671 Unknown (12h): Supported 00:13:40.671 Copy (19h): Supported LBA-Change 00:13:40.671 Unknown (1Dh): Supported LBA-Change 00:13:40.671 00:13:40.671 Error Log 00:13:40.671 ========= 00:13:40.671 00:13:40.671 Arbitration 00:13:40.671 =========== 00:13:40.671 Arbitration Burst: no limit 00:13:40.671 00:13:40.671 Power Management 00:13:40.671 ================ 00:13:40.671 Number of Power States: 1 00:13:40.671 Current Power State: Power State #0 00:13:40.671 Power State #0: 00:13:40.671 Max Power: 25.00 W 00:13:40.671 Non-Operational State: Operational 00:13:40.671 Entry Latency: 16 microseconds 00:13:40.671 Exit Latency: 4 microseconds 00:13:40.671 Relative Read Throughput: 0 00:13:40.671 Relative Read Latency: 0 00:13:40.671 Relative Write Throughput: 0 00:13:40.671 Relative Write Latency: 0 00:13:40.671 Idle Power: Not Reported 00:13:40.671 Active Power: Not Reported 00:13:40.671 Non-Operational Permissive Mode: Not Supported 00:13:40.671 00:13:40.671 Health Information 00:13:40.671 ================== 00:13:40.671 Critical Warnings: 00:13:40.671 Available Spare Space: OK 00:13:40.671 Temperature: OK 00:13:40.671 Device Reliability: OK 00:13:40.671 Read Only: No 00:13:40.671 Volatile Memory Backup: OK 00:13:40.671 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.671 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.671 Available Spare: 0% 00:13:40.671 Available Spare Threshold: 0% 00:13:40.671 Life Percentage Used: 0% 00:13:40.671 Data Units Read: 784 00:13:40.671 Data Units Written: 713 00:13:40.671 Host Read Commands: 34945 00:13:40.671 Host Write Commands: 34368 00:13:40.671 Controller Busy Time: 0 minutes 00:13:40.671 Power Cycles: 0 00:13:40.671 Power On Hours: 0 hours 00:13:40.671 Unsafe Shutdowns: 0 00:13:40.671 Unrecoverable Media Errors: 0 00:13:40.671 Lifetime Error Log Entries: 0 00:13:40.671 Warning Temperature Time: 0 minutes 00:13:40.671 Critical Temperature Time: 0 minutes 00:13:40.671 00:13:40.671 Number of Queues 00:13:40.671 ================ 00:13:40.671 Number of I/O Submission Queues: 64 00:13:40.671 Number of I/O Completion Queues: 64 00:13:40.671 00:13:40.671 ZNS Specific Controller Data 00:13:40.671 ============================ 00:13:40.671 Zone Append Size Limit: 0 00:13:40.671 00:13:40.671 00:13:40.671 Active Namespaces 00:13:40.671 ================= 00:13:40.671 Namespace ID:1 00:13:40.671 Error Recovery Timeout: Unlimited 00:13:40.671 Command Set Identifier: NVM (00h) 00:13:40.671 Deallocate: Supported 00:13:40.671 Deallocated/Unwritten Error: Supported 00:13:40.671 Deallocated Read Value: All 0x00 00:13:40.672 Deallocate in Write Zeroes: Not Supported 00:13:40.672 Deallocated Guard Field: 0xFFFF 00:13:40.672 Flush: Supported 00:13:40.672 Reservation: Not Supported 00:13:40.672 Namespace Sharing Capabilities: Multiple Controllers 00:13:40.672 Size (in LBAs): 262144 (1GiB) 00:13:40.672 Capacity (in LBAs): 262144 (1GiB) 00:13:40.672 Utilization (in LBAs): 262144 (1GiB) 00:13:40.672 Thin Provisioning: Not Supported 00:13:40.672 Per-NS Atomic Units: No 00:13:40.672 Maximum Single Source Range Length: 128 00:13:40.672 Maximum Copy Length: 128 00:13:40.672 Maximum Source Range Count: 128 00:13:40.672 NGUID/EUI64 Never Reused: No 00:13:40.672 Namespace Write Protected: No 
00:13:40.672 Endurance group ID: 1 00:13:40.672 Number of LBA Formats: 8 00:13:40.672 Current LBA Format: LBA Format #04 00:13:40.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.672 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.672 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.672 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.672 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.672 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.672 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.672 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.672 00:13:40.672 Get Feature FDP: 00:13:40.672 ================ 00:13:40.672 Enabled: Yes 00:13:40.672 FDP configuration index: 0 00:13:40.672 00:13:40.672 FDP configurations log page 00:13:40.672 =========================== 00:13:40.672 Number of FDP configurations: 1 00:13:40.672 Version: 0 00:13:40.672 Size: 112 00:13:40.672 FDP Configuration Descriptor: 0 00:13:40.672 Descriptor Size: 96 00:13:40.672 Reclaim Group Identifier format: 2 00:13:40.672 FDP Volatile Write Cache: Not Present 00:13:40.672 FDP Configuration: Valid 00:13:40.672 Vendor Specific Size: 0 00:13:40.672 Number of Reclaim Groups: 2 00:13:40.672 Number of Reclaim Unit Handles: 8 00:13:40.672 Max Placement Identifiers: 128 00:13:40.672 Number of Namespaces Supported: 256 00:13:40.672 Reclaim Unit Nominal Size: 6000000 bytes 00:13:40.672 Estimated Reclaim Unit Time Limit: Not Reported 00:13:40.672 RUH Desc #000: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #001: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #002: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #003: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #004: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #005: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #006: RUH Type: Initially Isolated 00:13:40.672 RUH Desc #007: RUH Type: Initially Isolated 00:13:40.672 00:13:40.672 FDP reclaim unit handle usage log page 00:13:40.672 ====================================== 00:13:40.672 Number of Reclaim Unit Handles: 8 00:13:40.672 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:40.672 RUH Usage Desc #001: RUH Attributes: Unused 00:13:40.672 RUH Usage Desc #002: RUH Attributes: Unused 00:13:40.672 RUH Usage Desc #003: RUH Attributes: Unused 00:13:40.672 RUH Usage Desc #004: RUH Attributes: Unused 00:13:40.672 RUH Usage Desc #005: RUH Attributes: Unused 00:13:40.672 RUH Usage Desc #006: RUH Attributes: Unused 00:13:40.672 RUH Usage Desc #007: RUH Attributes: Unused 00:13:40.672 00:13:40.672 FDP statistics log page 00:13:40.672 ======================= 00:13:40.672 Host bytes with metadata written: 441819136 00:13:40.672 Media bytes with metadata written: 441884672 00:13:40.672 Media bytes erased: 0 00:13:40.672 00:13:40.672 FDP events log page 00:13:40.672 =================== 00:13:40.672 Number of FDP events: 0 00:13:40.672 00:13:40.672 NVM Specific Namespace Data 00:13:40.672 =========================== 00:13:40.672 Logical Block Storage Tag Mask: 0 00:13:40.672 Protection Information Capabilities: 00:13:40.672 16b Guard Protection Information Storage Tag Support: No 00:13:40.672 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.672 Storage Tag Check Read Support: No 00:13:40.672 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.672 00:13:40.672 real 0m2.037s 00:13:40.672 user 0m0.795s 00:13:40.672 sys 0m1.033s 00:13:40.672 10:19:34 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.672 ************************************ 00:13:40.672 END TEST nvme_identify 00:13:40.672 10:19:34 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:40.672 ************************************ 00:13:40.672 10:19:34 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:40.672 10:19:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.672 10:19:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.672 10:19:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.672 ************************************ 00:13:40.672 START TEST nvme_perf 00:13:40.672 ************************************ 00:13:40.672 10:19:34 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:13:40.672 10:19:34 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:42.050 Initializing NVMe Controllers 00:13:42.050 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:42.050 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:42.050 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:42.050 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:42.050 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:42.050 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:42.050 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:42.050 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:42.050 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:42.050 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:42.050 Initialization complete. Launching workers. 
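The identify pass above loops spdk_nvme_identify over each PCIe BDF (the `for bdf in "${bdfs[@]}"` step from nvme.sh), and the perf run drives all six attached namespaces at queue depth 128 with 12 KiB sequential reads for one second; doubling the -L flag (-LL) requests the detailed per-device latency histograms that follow the summary table below. A minimal shell sketch of replaying the same pair of steps by hand, assuming the SPDK tree is built at the path shown in this log and the controllers are already bound for userspace access; the flag glosses follow spdk_nvme_perf's usage text and should be treated as assumptions for other SPDK versions:

#!/usr/bin/env bash
# Sketch: replay the identify and perf steps from this run by hand.
# Assumptions: SPDK built at the path below (taken from this log) and the
# four QEMU controllers bound to a userspace driver (e.g. scripts/setup.sh).
SPDK=/home/vagrant/spdk_repo/spdk
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:13.0 0000:00:12.0; do
    "$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done
# -q 128: queue depth per namespace     -w read: sequential read workload
# -o 12288: I/O size in bytes (12 KiB)  -t 1: run time in seconds
# -LL: latency summary plus detailed histogram
# -i 0: shared memory group ID          -N: skip shutdown notification
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N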
00:13:42.050 ======================================================== 00:13:42.050 Latency(us) 00:13:42.050 Device Information : IOPS MiB/s Average min max 00:13:42.050 PCIE (0000:00:10.0) NSID 1 from core 0: 11298.84 132.41 11346.12 8228.92 52092.57 00:13:42.050 PCIE (0000:00:11.0) NSID 1 from core 0: 11298.84 132.41 11312.92 8405.12 48452.77 00:13:42.050 PCIE (0000:00:13.0) NSID 1 from core 0: 11298.84 132.41 11278.98 8418.00 45741.36 00:13:42.050 PCIE (0000:00:12.0) NSID 1 from core 0: 11298.84 132.41 11242.93 8463.95 42368.63 00:13:42.050 PCIE (0000:00:12.0) NSID 2 from core 0: 11298.84 132.41 11204.04 8467.29 38642.80 00:13:42.050 PCIE (0000:00:12.0) NSID 3 from core 0: 11298.84 132.41 11169.93 8412.32 35067.05 00:13:42.050 ======================================================== 00:13:42.050 Total : 67793.03 794.45 11259.15 8228.92 52092.57 00:13:42.050 00:13:42.050 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:42.050 ================================================================================= 00:13:42.050 1.00000% : 8638.836us 00:13:42.050 10.00000% : 9234.618us 00:13:42.050 25.00000% : 9830.400us 00:13:42.050 50.00000% : 10604.916us 00:13:42.050 75.00000% : 11856.058us 00:13:42.050 90.00000% : 13762.560us 00:13:42.050 95.00000% : 14477.498us 00:13:42.050 98.00000% : 15371.171us 00:13:42.050 99.00000% : 40989.789us 00:13:42.050 99.50000% : 49330.735us 00:13:42.050 99.90000% : 51713.862us 00:13:42.050 99.99000% : 52190.487us 00:13:42.050 99.99900% : 52190.487us 00:13:42.050 99.99990% : 52190.487us 00:13:42.050 99.99999% : 52190.487us 00:13:42.050 00:13:42.050 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:42.050 ================================================================================= 00:13:42.050 1.00000% : 8698.415us 00:13:42.050 10.00000% : 9294.196us 00:13:42.050 25.00000% : 9830.400us 00:13:42.050 50.00000% : 10604.916us 00:13:42.050 75.00000% : 11856.058us 00:13:42.050 90.00000% : 13822.138us 00:13:42.050 95.00000% : 14477.498us 00:13:42.050 98.00000% : 15490.327us 00:13:42.050 99.00000% : 38368.349us 00:13:42.050 99.50000% : 46232.669us 00:13:42.050 99.90000% : 48139.171us 00:13:42.050 99.99000% : 48615.796us 00:13:42.050 99.99900% : 48615.796us 00:13:42.050 99.99990% : 48615.796us 00:13:42.050 99.99999% : 48615.796us 00:13:42.050 00:13:42.050 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:42.050 ================================================================================= 00:13:42.050 1.00000% : 8757.993us 00:13:42.050 10.00000% : 9294.196us 00:13:42.050 25.00000% : 9830.400us 00:13:42.050 50.00000% : 10604.916us 00:13:42.050 75.00000% : 11796.480us 00:13:42.050 90.00000% : 13822.138us 00:13:42.050 95.00000% : 14417.920us 00:13:42.050 98.00000% : 15192.436us 00:13:42.050 99.00000% : 35508.596us 00:13:42.050 99.50000% : 43611.229us 00:13:42.050 99.90000% : 45517.731us 00:13:42.050 99.99000% : 45756.044us 00:13:42.050 99.99900% : 45756.044us 00:13:42.050 99.99990% : 45756.044us 00:13:42.050 99.99999% : 45756.044us 00:13:42.050 00:13:42.050 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:42.050 ================================================================================= 00:13:42.050 1.00000% : 8757.993us 00:13:42.050 10.00000% : 9294.196us 00:13:42.050 25.00000% : 9830.400us 00:13:42.050 50.00000% : 10604.916us 00:13:42.050 75.00000% : 11796.480us 00:13:42.050 90.00000% : 13762.560us 00:13:42.050 95.00000% : 14417.920us 00:13:42.050 98.00000% : 15192.436us 
00:13:42.050 99.00000% : 32172.218us 00:13:42.050 99.50000% : 40036.538us 00:13:42.050 99.90000% : 41943.040us 00:13:42.050 99.99000% : 42419.665us 00:13:42.050 99.99900% : 42419.665us 00:13:42.050 99.99990% : 42419.665us 00:13:42.050 99.99999% : 42419.665us 00:13:42.050 00:13:42.050 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:42.050 ================================================================================= 00:13:42.050 1.00000% : 8757.993us 00:13:42.050 10.00000% : 9294.196us 00:13:42.050 25.00000% : 9830.400us 00:13:42.050 50.00000% : 10604.916us 00:13:42.050 75.00000% : 11856.058us 00:13:42.050 90.00000% : 13702.982us 00:13:42.050 95.00000% : 14417.920us 00:13:42.050 98.00000% : 15132.858us 00:13:42.050 99.00000% : 28597.527us 00:13:42.050 99.50000% : 36223.535us 00:13:42.050 99.90000% : 38368.349us 00:13:42.050 99.99000% : 38606.662us 00:13:42.050 99.99900% : 38844.975us 00:13:42.050 99.99990% : 38844.975us 00:13:42.050 99.99999% : 38844.975us 00:13:42.050 00:13:42.050 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:42.050 ================================================================================= 00:13:42.050 1.00000% : 8698.415us 00:13:42.050 10.00000% : 9294.196us 00:13:42.050 25.00000% : 9830.400us 00:13:42.050 50.00000% : 10545.338us 00:13:42.050 75.00000% : 11856.058us 00:13:42.050 90.00000% : 13702.982us 00:13:42.050 95.00000% : 14417.920us 00:13:42.050 98.00000% : 15252.015us 00:13:42.050 99.00000% : 25499.462us 00:13:42.050 99.50000% : 32887.156us 00:13:42.050 99.90000% : 34793.658us 00:13:42.050 99.99000% : 35031.971us 00:13:42.050 99.99900% : 35270.284us 00:13:42.050 99.99990% : 35270.284us 00:13:42.050 99.99999% : 35270.284us 00:13:42.050 00:13:42.050 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:42.050 ============================================================================== 00:13:42.050 Range in us Cumulative IO count 00:13:42.050 8221.789 - 8281.367: 0.0177% ( 2) 00:13:42.050 8281.367 - 8340.945: 0.0794% ( 7) 00:13:42.050 8340.945 - 8400.524: 0.2383% ( 18) 00:13:42.050 8400.524 - 8460.102: 0.3884% ( 17) 00:13:42.050 8460.102 - 8519.680: 0.6268% ( 27) 00:13:42.050 8519.680 - 8579.258: 0.8563% ( 26) 00:13:42.050 8579.258 - 8638.836: 1.2800% ( 48) 00:13:42.050 8638.836 - 8698.415: 1.6684% ( 44) 00:13:42.050 8698.415 - 8757.993: 2.1981% ( 60) 00:13:42.050 8757.993 - 8817.571: 2.8160% ( 70) 00:13:42.050 8817.571 - 8877.149: 3.6017% ( 89) 00:13:42.050 8877.149 - 8936.727: 4.4315% ( 94) 00:13:42.050 8936.727 - 8996.305: 5.3761% ( 107) 00:13:42.050 8996.305 - 9055.884: 6.5413% ( 132) 00:13:42.050 9055.884 - 9115.462: 7.6889% ( 130) 00:13:42.050 9115.462 - 9175.040: 8.8277% ( 129) 00:13:42.050 9175.040 - 9234.618: 10.1077% ( 145) 00:13:42.050 9234.618 - 9294.196: 11.4936% ( 157) 00:13:42.050 9294.196 - 9353.775: 12.8531% ( 154) 00:13:42.050 9353.775 - 9413.353: 14.2655% ( 160) 00:13:42.050 9413.353 - 9472.931: 15.8016% ( 174) 00:13:42.050 9472.931 - 9532.509: 17.3641% ( 177) 00:13:42.050 9532.509 - 9592.087: 19.1296% ( 200) 00:13:42.050 9592.087 - 9651.665: 20.7715% ( 186) 00:13:42.050 9651.665 - 9711.244: 22.6871% ( 217) 00:13:42.050 9711.244 - 9770.822: 24.6204% ( 219) 00:13:42.050 9770.822 - 9830.400: 26.6508% ( 230) 00:13:42.050 9830.400 - 9889.978: 28.6370% ( 225) 00:13:42.050 9889.978 - 9949.556: 30.4820% ( 209) 00:13:42.050 9949.556 - 10009.135: 32.7331% ( 255) 00:13:42.050 10009.135 - 10068.713: 34.6398% ( 216) 00:13:42.050 10068.713 - 10128.291: 36.6967% ( 233) 00:13:42.050 
10128.291 - 10187.869: 38.5593% ( 211) 00:13:42.050 10187.869 - 10247.447: 40.5456% ( 225) 00:13:42.050 10247.447 - 10307.025: 42.6377% ( 237) 00:13:42.050 10307.025 - 10366.604: 44.5533% ( 217) 00:13:42.050 10366.604 - 10426.182: 46.4071% ( 210) 00:13:42.050 10426.182 - 10485.760: 48.1638% ( 199) 00:13:42.050 10485.760 - 10545.338: 49.9647% ( 204) 00:13:42.050 10545.338 - 10604.916: 51.6949% ( 196) 00:13:42.050 10604.916 - 10664.495: 53.3545% ( 188) 00:13:42.050 10664.495 - 10724.073: 55.0494% ( 192) 00:13:42.050 10724.073 - 10783.651: 56.7179% ( 189) 00:13:42.050 10783.651 - 10843.229: 58.2097% ( 169) 00:13:42.050 10843.229 - 10902.807: 59.8340% ( 184) 00:13:42.050 10902.807 - 10962.385: 61.2818% ( 164) 00:13:42.050 10962.385 - 11021.964: 62.6677% ( 157) 00:13:42.050 11021.964 - 11081.542: 64.0802% ( 160) 00:13:42.050 11081.542 - 11141.120: 65.2631% ( 134) 00:13:42.050 11141.120 - 11200.698: 66.3577% ( 124) 00:13:42.050 11200.698 - 11260.276: 67.4435% ( 123) 00:13:42.050 11260.276 - 11319.855: 68.4145% ( 110) 00:13:42.050 11319.855 - 11379.433: 69.3150% ( 102) 00:13:42.050 11379.433 - 11439.011: 70.1713% ( 97) 00:13:42.051 11439.011 - 11498.589: 71.0364% ( 98) 00:13:42.051 11498.589 - 11558.167: 71.7602% ( 82) 00:13:42.051 11558.167 - 11617.745: 72.5371% ( 88) 00:13:42.051 11617.745 - 11677.324: 73.1285% ( 67) 00:13:42.051 11677.324 - 11736.902: 73.8612% ( 83) 00:13:42.051 11736.902 - 11796.480: 74.6028% ( 84) 00:13:42.051 11796.480 - 11856.058: 75.1324% ( 60) 00:13:42.051 11856.058 - 11915.636: 75.6003% ( 53) 00:13:42.051 11915.636 - 11975.215: 76.1829% ( 66) 00:13:42.051 11975.215 - 12034.793: 76.7744% ( 67) 00:13:42.051 12034.793 - 12094.371: 77.2422% ( 53) 00:13:42.051 12094.371 - 12153.949: 77.8425% ( 68) 00:13:42.051 12153.949 - 12213.527: 78.3104% ( 53) 00:13:42.051 12213.527 - 12273.105: 78.8047% ( 56) 00:13:42.051 12273.105 - 12332.684: 79.2108% ( 46) 00:13:42.051 12332.684 - 12392.262: 79.7581% ( 62) 00:13:42.051 12392.262 - 12451.840: 80.2260% ( 53) 00:13:42.051 12451.840 - 12511.418: 80.7556% ( 60) 00:13:42.051 12511.418 - 12570.996: 81.2500% ( 56) 00:13:42.051 12570.996 - 12630.575: 81.7532% ( 57) 00:13:42.051 12630.575 - 12690.153: 82.2564% ( 57) 00:13:42.051 12690.153 - 12749.731: 82.7948% ( 61) 00:13:42.051 12749.731 - 12809.309: 83.3422% ( 62) 00:13:42.051 12809.309 - 12868.887: 83.8012% ( 52) 00:13:42.051 12868.887 - 12928.465: 84.2956% ( 56) 00:13:42.051 12928.465 - 12988.044: 84.7193% ( 48) 00:13:42.051 12988.044 - 13047.622: 85.1430% ( 48) 00:13:42.051 13047.622 - 13107.200: 85.6197% ( 54) 00:13:42.051 13107.200 - 13166.778: 86.0169% ( 45) 00:13:42.051 13166.778 - 13226.356: 86.4319% ( 47) 00:13:42.051 13226.356 - 13285.935: 86.7761% ( 39) 00:13:42.051 13285.935 - 13345.513: 87.2087% ( 49) 00:13:42.051 13345.513 - 13405.091: 87.6148% ( 46) 00:13:42.051 13405.091 - 13464.669: 88.0297% ( 47) 00:13:42.051 13464.669 - 13524.247: 88.4534% ( 48) 00:13:42.051 13524.247 - 13583.825: 88.8948% ( 50) 00:13:42.051 13583.825 - 13643.404: 89.2744% ( 43) 00:13:42.051 13643.404 - 13702.982: 89.6540% ( 43) 00:13:42.051 13702.982 - 13762.560: 90.1306% ( 54) 00:13:42.051 13762.560 - 13822.138: 90.5809% ( 51) 00:13:42.051 13822.138 - 13881.716: 91.0222% ( 50) 00:13:42.051 13881.716 - 13941.295: 91.4636% ( 50) 00:13:42.051 13941.295 - 14000.873: 91.9668% ( 57) 00:13:42.051 14000.873 - 14060.451: 92.4082% ( 50) 00:13:42.051 14060.451 - 14120.029: 92.8849% ( 54) 00:13:42.051 14120.029 - 14179.607: 93.3174% ( 49) 00:13:42.051 14179.607 - 14239.185: 93.6882% ( 42) 00:13:42.051 14239.185 - 
14298.764: 94.0766% ( 44) 00:13:42.051 14298.764 - 14358.342: 94.4474% ( 42) 00:13:42.051 14358.342 - 14417.920: 94.8270% ( 43) 00:13:42.051 14417.920 - 14477.498: 95.1624% ( 38) 00:13:42.051 14477.498 - 14537.076: 95.5244% ( 41) 00:13:42.051 14537.076 - 14596.655: 95.8157% ( 33) 00:13:42.051 14596.655 - 14656.233: 96.1070% ( 33) 00:13:42.051 14656.233 - 14715.811: 96.3895% ( 32) 00:13:42.051 14715.811 - 14775.389: 96.6367% ( 28) 00:13:42.051 14775.389 - 14834.967: 96.8573% ( 25) 00:13:42.051 14834.967 - 14894.545: 97.0869% ( 26) 00:13:42.051 14894.545 - 14954.124: 97.2193% ( 15) 00:13:42.051 14954.124 - 15013.702: 97.4047% ( 21) 00:13:42.051 15013.702 - 15073.280: 97.6077% ( 23) 00:13:42.051 15073.280 - 15132.858: 97.7489% ( 16) 00:13:42.051 15132.858 - 15192.436: 97.8284% ( 9) 00:13:42.051 15192.436 - 15252.015: 97.9167% ( 10) 00:13:42.051 15252.015 - 15371.171: 98.1109% ( 22) 00:13:42.051 15371.171 - 15490.327: 98.2345% ( 14) 00:13:42.051 15490.327 - 15609.484: 98.3669% ( 15) 00:13:42.051 15609.484 - 15728.640: 98.4905% ( 14) 00:13:42.051 15728.640 - 15847.796: 98.6141% ( 14) 00:13:42.051 15847.796 - 15966.953: 98.6670% ( 6) 00:13:42.051 15966.953 - 16086.109: 98.7200% ( 6) 00:13:42.051 16086.109 - 16205.265: 98.7730% ( 6) 00:13:42.051 16205.265 - 16324.422: 98.8171% ( 5) 00:13:42.051 16324.422 - 16443.578: 98.8612% ( 5) 00:13:42.051 16443.578 - 16562.735: 98.8701% ( 1) 00:13:42.051 40036.538 - 40274.851: 98.8789% ( 1) 00:13:42.051 40274.851 - 40513.164: 98.9142% ( 4) 00:13:42.051 40513.164 - 40751.476: 98.9583% ( 5) 00:13:42.051 40751.476 - 40989.789: 99.0025% ( 5) 00:13:42.051 40989.789 - 41228.102: 99.0466% ( 5) 00:13:42.051 41228.102 - 41466.415: 99.0819% ( 4) 00:13:42.051 41466.415 - 41704.727: 99.1261% ( 5) 00:13:42.051 41704.727 - 41943.040: 99.1702% ( 5) 00:13:42.051 41943.040 - 42181.353: 99.2055% ( 4) 00:13:42.051 42181.353 - 42419.665: 99.2496% ( 5) 00:13:42.051 42419.665 - 42657.978: 99.2938% ( 5) 00:13:42.051 42657.978 - 42896.291: 99.3379% ( 5) 00:13:42.051 42896.291 - 43134.604: 99.3732% ( 4) 00:13:42.051 43134.604 - 43372.916: 99.4262% ( 6) 00:13:42.051 43372.916 - 43611.229: 99.4350% ( 1) 00:13:42.051 48854.109 - 49092.422: 99.4527% ( 2) 00:13:42.051 49092.422 - 49330.735: 99.5056% ( 6) 00:13:42.051 49330.735 - 49569.047: 99.5498% ( 5) 00:13:42.051 49569.047 - 49807.360: 99.5939% ( 5) 00:13:42.051 49807.360 - 50045.673: 99.6204% ( 3) 00:13:42.051 50045.673 - 50283.985: 99.6645% ( 5) 00:13:42.051 50283.985 - 50522.298: 99.6999% ( 4) 00:13:42.051 50522.298 - 50760.611: 99.7528% ( 6) 00:13:42.051 50760.611 - 50998.924: 99.7970% ( 5) 00:13:42.051 50998.924 - 51237.236: 99.8499% ( 6) 00:13:42.051 51237.236 - 51475.549: 99.8941% ( 5) 00:13:42.051 51475.549 - 51713.862: 99.9382% ( 5) 00:13:42.051 51713.862 - 51952.175: 99.9823% ( 5) 00:13:42.051 51952.175 - 52190.487: 100.0000% ( 2) 00:13:42.051 00:13:42.051 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:42.051 ============================================================================== 00:13:42.051 Range in us Cumulative IO count 00:13:42.051 8400.524 - 8460.102: 0.1148% ( 13) 00:13:42.051 8460.102 - 8519.680: 0.3001% ( 21) 00:13:42.051 8519.680 - 8579.258: 0.4590% ( 18) 00:13:42.051 8579.258 - 8638.836: 0.6797% ( 25) 00:13:42.051 8638.836 - 8698.415: 1.0417% ( 41) 00:13:42.051 8698.415 - 8757.993: 1.3771% ( 38) 00:13:42.051 8757.993 - 8817.571: 1.8538% ( 54) 00:13:42.051 8817.571 - 8877.149: 2.5865% ( 83) 00:13:42.051 8877.149 - 8936.727: 3.3987% ( 92) 00:13:42.051 8936.727 - 8996.305: 4.4315% ( 117) 
00:13:42.051 8996.305 - 9055.884: 5.4555% ( 116) 00:13:42.051 9055.884 - 9115.462: 6.6737% ( 138) 00:13:42.051 9115.462 - 9175.040: 8.0862% ( 160) 00:13:42.051 9175.040 - 9234.618: 9.5516% ( 166) 00:13:42.051 9234.618 - 9294.196: 11.0081% ( 165) 00:13:42.051 9294.196 - 9353.775: 12.6589% ( 187) 00:13:42.051 9353.775 - 9413.353: 14.2126% ( 176) 00:13:42.051 9413.353 - 9472.931: 15.8545% ( 186) 00:13:42.051 9472.931 - 9532.509: 17.4523% ( 181) 00:13:42.051 9532.509 - 9592.087: 19.2002% ( 198) 00:13:42.051 9592.087 - 9651.665: 21.0629% ( 211) 00:13:42.051 9651.665 - 9711.244: 22.7754% ( 194) 00:13:42.051 9711.244 - 9770.822: 24.6116% ( 208) 00:13:42.051 9770.822 - 9830.400: 26.5802% ( 223) 00:13:42.051 9830.400 - 9889.978: 28.5311% ( 221) 00:13:42.051 9889.978 - 9949.556: 30.6409% ( 239) 00:13:42.051 9949.556 - 10009.135: 32.5918% ( 221) 00:13:42.051 10009.135 - 10068.713: 34.5780% ( 225) 00:13:42.051 10068.713 - 10128.291: 36.5378% ( 222) 00:13:42.051 10128.291 - 10187.869: 38.5593% ( 229) 00:13:42.051 10187.869 - 10247.447: 40.5367% ( 224) 00:13:42.051 10247.447 - 10307.025: 42.3023% ( 200) 00:13:42.051 10307.025 - 10366.604: 44.1384% ( 208) 00:13:42.051 10366.604 - 10426.182: 45.9040% ( 200) 00:13:42.051 10426.182 - 10485.760: 47.7843% ( 213) 00:13:42.051 10485.760 - 10545.338: 49.6822% ( 215) 00:13:42.051 10545.338 - 10604.916: 51.5890% ( 216) 00:13:42.051 10604.916 - 10664.495: 53.4428% ( 210) 00:13:42.051 10664.495 - 10724.073: 55.3319% ( 214) 00:13:42.051 10724.073 - 10783.651: 57.1151% ( 202) 00:13:42.051 10783.651 - 10843.229: 58.7659% ( 187) 00:13:42.051 10843.229 - 10902.807: 60.3460% ( 179) 00:13:42.051 10902.807 - 10962.385: 61.9527% ( 182) 00:13:42.051 10962.385 - 11021.964: 63.4357% ( 168) 00:13:42.051 11021.964 - 11081.542: 64.7334% ( 147) 00:13:42.051 11081.542 - 11141.120: 65.9251% ( 135) 00:13:42.051 11141.120 - 11200.698: 66.9668% ( 118) 00:13:42.051 11200.698 - 11260.276: 68.0261% ( 120) 00:13:42.051 11260.276 - 11319.855: 68.9001% ( 99) 00:13:42.051 11319.855 - 11379.433: 69.7740% ( 99) 00:13:42.051 11379.433 - 11439.011: 70.6568% ( 100) 00:13:42.051 11439.011 - 11498.589: 71.4866% ( 94) 00:13:42.051 11498.589 - 11558.167: 72.2634% ( 88) 00:13:42.051 11558.167 - 11617.745: 72.9343% ( 76) 00:13:42.051 11617.745 - 11677.324: 73.5258% ( 67) 00:13:42.051 11677.324 - 11736.902: 74.1172% ( 67) 00:13:42.051 11736.902 - 11796.480: 74.7263% ( 69) 00:13:42.051 11796.480 - 11856.058: 75.3266% ( 68) 00:13:42.051 11856.058 - 11915.636: 75.9710% ( 73) 00:13:42.051 11915.636 - 11975.215: 76.5537% ( 66) 00:13:42.051 11975.215 - 12034.793: 77.1010% ( 62) 00:13:42.051 12034.793 - 12094.371: 77.6130% ( 58) 00:13:42.051 12094.371 - 12153.949: 78.1162% ( 57) 00:13:42.051 12153.949 - 12213.527: 78.6105% ( 56) 00:13:42.051 12213.527 - 12273.105: 79.1314% ( 59) 00:13:42.051 12273.105 - 12332.684: 79.6345% ( 57) 00:13:42.051 12332.684 - 12392.262: 80.1819% ( 62) 00:13:42.051 12392.262 - 12451.840: 80.7380% ( 63) 00:13:42.051 12451.840 - 12511.418: 81.2323% ( 56) 00:13:42.051 12511.418 - 12570.996: 81.7973% ( 64) 00:13:42.051 12570.996 - 12630.575: 82.3181% ( 59) 00:13:42.051 12630.575 - 12690.153: 82.7772% ( 52) 00:13:42.051 12690.153 - 12749.731: 83.2274% ( 51) 00:13:42.051 12749.731 - 12809.309: 83.6776% ( 51) 00:13:42.051 12809.309 - 12868.887: 84.0660% ( 44) 00:13:42.051 12868.887 - 12928.465: 84.4721% ( 46) 00:13:42.051 12928.465 - 12988.044: 84.9047% ( 49) 00:13:42.051 12988.044 - 13047.622: 85.3460% ( 50) 00:13:42.051 13047.622 - 13107.200: 85.7080% ( 41) 00:13:42.051 13107.200 - 
13166.778: 86.0434% ( 38) 00:13:42.051 13166.778 - 13226.356: 86.3347% ( 33) 00:13:42.051 13226.356 - 13285.935: 86.6879% ( 40) 00:13:42.052 13285.935 - 13345.513: 87.0586% ( 42) 00:13:42.052 13345.513 - 13405.091: 87.4029% ( 39) 00:13:42.052 13405.091 - 13464.669: 87.8708% ( 53) 00:13:42.052 13464.669 - 13524.247: 88.2680% ( 45) 00:13:42.052 13524.247 - 13583.825: 88.6653% ( 45) 00:13:42.052 13583.825 - 13643.404: 89.0095% ( 39) 00:13:42.052 13643.404 - 13702.982: 89.3362% ( 37) 00:13:42.052 13702.982 - 13762.560: 89.7511% ( 47) 00:13:42.052 13762.560 - 13822.138: 90.1836% ( 49) 00:13:42.052 13822.138 - 13881.716: 90.6603% ( 54) 00:13:42.052 13881.716 - 13941.295: 91.1547% ( 56) 00:13:42.052 13941.295 - 14000.873: 91.6314% ( 54) 00:13:42.052 14000.873 - 14060.451: 92.1434% ( 58) 00:13:42.052 14060.451 - 14120.029: 92.6024% ( 52) 00:13:42.052 14120.029 - 14179.607: 93.0350% ( 49) 00:13:42.052 14179.607 - 14239.185: 93.4852% ( 51) 00:13:42.052 14239.185 - 14298.764: 93.9442% ( 52) 00:13:42.052 14298.764 - 14358.342: 94.4297% ( 55) 00:13:42.052 14358.342 - 14417.920: 94.8711% ( 50) 00:13:42.052 14417.920 - 14477.498: 95.2154% ( 39) 00:13:42.052 14477.498 - 14537.076: 95.5244% ( 35) 00:13:42.052 14537.076 - 14596.655: 95.8333% ( 35) 00:13:42.052 14596.655 - 14656.233: 96.1776% ( 39) 00:13:42.052 14656.233 - 14715.811: 96.4866% ( 35) 00:13:42.052 14715.811 - 14775.389: 96.7867% ( 34) 00:13:42.052 14775.389 - 14834.967: 97.0162% ( 26) 00:13:42.052 14834.967 - 14894.545: 97.2105% ( 22) 00:13:42.052 14894.545 - 14954.124: 97.3782% ( 19) 00:13:42.052 14954.124 - 15013.702: 97.4753% ( 11) 00:13:42.052 15013.702 - 15073.280: 97.5459% ( 8) 00:13:42.052 15073.280 - 15132.858: 97.6342% ( 10) 00:13:42.052 15132.858 - 15192.436: 97.7048% ( 8) 00:13:42.052 15192.436 - 15252.015: 97.7666% ( 7) 00:13:42.052 15252.015 - 15371.171: 97.9078% ( 16) 00:13:42.052 15371.171 - 15490.327: 98.0226% ( 13) 00:13:42.052 15490.327 - 15609.484: 98.1992% ( 20) 00:13:42.052 15609.484 - 15728.640: 98.3051% ( 12) 00:13:42.052 15728.640 - 15847.796: 98.4110% ( 12) 00:13:42.052 15847.796 - 15966.953: 98.5169% ( 12) 00:13:42.052 15966.953 - 16086.109: 98.6141% ( 11) 00:13:42.052 16086.109 - 16205.265: 98.7112% ( 11) 00:13:42.052 16205.265 - 16324.422: 98.7730% ( 7) 00:13:42.052 16324.422 - 16443.578: 98.8171% ( 5) 00:13:42.052 16443.578 - 16562.735: 98.8612% ( 5) 00:13:42.052 16562.735 - 16681.891: 98.8701% ( 1) 00:13:42.052 37415.098 - 37653.411: 98.8877% ( 2) 00:13:42.052 37653.411 - 37891.724: 98.9319% ( 5) 00:13:42.052 37891.724 - 38130.036: 98.9760% ( 5) 00:13:42.052 38130.036 - 38368.349: 99.0201% ( 5) 00:13:42.052 38368.349 - 38606.662: 99.0643% ( 5) 00:13:42.052 38606.662 - 38844.975: 99.1084% ( 5) 00:13:42.052 38844.975 - 39083.287: 99.1525% ( 5) 00:13:42.052 39083.287 - 39321.600: 99.2055% ( 6) 00:13:42.052 39321.600 - 39559.913: 99.2496% ( 5) 00:13:42.052 39559.913 - 39798.225: 99.2938% ( 5) 00:13:42.052 39798.225 - 40036.538: 99.3379% ( 5) 00:13:42.052 40036.538 - 40274.851: 99.3909% ( 6) 00:13:42.052 40274.851 - 40513.164: 99.4350% ( 5) 00:13:42.052 45756.044 - 45994.356: 99.4880% ( 6) 00:13:42.052 45994.356 - 46232.669: 99.5321% ( 5) 00:13:42.052 46232.669 - 46470.982: 99.5763% ( 5) 00:13:42.052 46470.982 - 46709.295: 99.6292% ( 6) 00:13:42.052 46709.295 - 46947.607: 99.6822% ( 6) 00:13:42.052 46947.607 - 47185.920: 99.7352% ( 6) 00:13:42.052 47185.920 - 47424.233: 99.7793% ( 5) 00:13:42.052 47424.233 - 47662.545: 99.8323% ( 6) 00:13:42.052 47662.545 - 47900.858: 99.8764% ( 5) 00:13:42.052 47900.858 - 48139.171: 
99.9294% ( 6) 00:13:42.052 48139.171 - 48377.484: 99.9823% ( 6) 00:13:42.052 48377.484 - 48615.796: 100.0000% ( 2) 00:13:42.052 00:13:42.052 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:42.052 ============================================================================== 00:13:42.052 Range in us Cumulative IO count 00:13:42.052 8400.524 - 8460.102: 0.0706% ( 8) 00:13:42.052 8460.102 - 8519.680: 0.1324% ( 7) 00:13:42.052 8519.680 - 8579.258: 0.2913% ( 18) 00:13:42.052 8579.258 - 8638.836: 0.5208% ( 26) 00:13:42.052 8638.836 - 8698.415: 0.8563% ( 38) 00:13:42.052 8698.415 - 8757.993: 1.3330% ( 54) 00:13:42.052 8757.993 - 8817.571: 1.8273% ( 56) 00:13:42.052 8817.571 - 8877.149: 2.5335% ( 80) 00:13:42.052 8877.149 - 8936.727: 3.5576% ( 116) 00:13:42.052 8936.727 - 8996.305: 4.4933% ( 106) 00:13:42.052 8996.305 - 9055.884: 5.5614% ( 121) 00:13:42.052 9055.884 - 9115.462: 6.7797% ( 138) 00:13:42.052 9115.462 - 9175.040: 8.1215% ( 152) 00:13:42.052 9175.040 - 9234.618: 9.5427% ( 161) 00:13:42.052 9234.618 - 9294.196: 11.0611% ( 172) 00:13:42.052 9294.196 - 9353.775: 12.5441% ( 168) 00:13:42.052 9353.775 - 9413.353: 14.1861% ( 186) 00:13:42.052 9413.353 - 9472.931: 15.9251% ( 197) 00:13:42.052 9472.931 - 9532.509: 17.6377% ( 194) 00:13:42.052 9532.509 - 9592.087: 19.3768% ( 197) 00:13:42.052 9592.087 - 9651.665: 21.1423% ( 200) 00:13:42.052 9651.665 - 9711.244: 22.8460% ( 193) 00:13:42.052 9711.244 - 9770.822: 24.5410% ( 192) 00:13:42.052 9770.822 - 9830.400: 26.2977% ( 199) 00:13:42.052 9830.400 - 9889.978: 28.1338% ( 208) 00:13:42.052 9889.978 - 9949.556: 29.9347% ( 204) 00:13:42.052 9949.556 - 10009.135: 31.8768% ( 220) 00:13:42.052 10009.135 - 10068.713: 33.8806% ( 227) 00:13:42.052 10068.713 - 10128.291: 35.8404% ( 222) 00:13:42.052 10128.291 - 10187.869: 37.7737% ( 219) 00:13:42.052 10187.869 - 10247.447: 39.6540% ( 213) 00:13:42.052 10247.447 - 10307.025: 41.5078% ( 210) 00:13:42.052 10307.025 - 10366.604: 43.4763% ( 223) 00:13:42.052 10366.604 - 10426.182: 45.2772% ( 204) 00:13:42.052 10426.182 - 10485.760: 47.1398% ( 211) 00:13:42.052 10485.760 - 10545.338: 49.0113% ( 212) 00:13:42.052 10545.338 - 10604.916: 50.9004% ( 214) 00:13:42.052 10604.916 - 10664.495: 52.8425% ( 220) 00:13:42.052 10664.495 - 10724.073: 54.8199% ( 224) 00:13:42.052 10724.073 - 10783.651: 56.7355% ( 217) 00:13:42.052 10783.651 - 10843.229: 58.4569% ( 195) 00:13:42.052 10843.229 - 10902.807: 60.1783% ( 195) 00:13:42.052 10902.807 - 10962.385: 61.7408% ( 177) 00:13:42.052 10962.385 - 11021.964: 63.1886% ( 164) 00:13:42.052 11021.964 - 11081.542: 64.4862% ( 147) 00:13:42.052 11081.542 - 11141.120: 65.7044% ( 138) 00:13:42.052 11141.120 - 11200.698: 66.7991% ( 124) 00:13:42.052 11200.698 - 11260.276: 67.9025% ( 125) 00:13:42.052 11260.276 - 11319.855: 68.9089% ( 114) 00:13:42.052 11319.855 - 11379.433: 69.8446% ( 106) 00:13:42.052 11379.433 - 11439.011: 70.7892% ( 107) 00:13:42.052 11439.011 - 11498.589: 71.6278% ( 95) 00:13:42.052 11498.589 - 11558.167: 72.3782% ( 85) 00:13:42.052 11558.167 - 11617.745: 73.1197% ( 84) 00:13:42.052 11617.745 - 11677.324: 73.9230% ( 91) 00:13:42.052 11677.324 - 11736.902: 74.7617% ( 95) 00:13:42.052 11736.902 - 11796.480: 75.4326% ( 76) 00:13:42.052 11796.480 - 11856.058: 75.9887% ( 63) 00:13:42.052 11856.058 - 11915.636: 76.5537% ( 64) 00:13:42.052 11915.636 - 11975.215: 77.1098% ( 63) 00:13:42.052 11975.215 - 12034.793: 77.6836% ( 65) 00:13:42.052 12034.793 - 12094.371: 78.2751% ( 67) 00:13:42.052 12094.371 - 12153.949: 78.7606% ( 55) 00:13:42.052 12153.949 - 
12213.527: 79.2991% ( 61) 00:13:42.052 12213.527 - 12273.105: 79.7846% ( 55) 00:13:42.052 12273.105 - 12332.684: 80.2260% ( 50) 00:13:42.052 12332.684 - 12392.262: 80.6674% ( 50) 00:13:42.052 12392.262 - 12451.840: 81.2412% ( 65) 00:13:42.052 12451.840 - 12511.418: 81.7620% ( 59) 00:13:42.052 12511.418 - 12570.996: 82.3181% ( 63) 00:13:42.052 12570.996 - 12630.575: 82.7860% ( 53) 00:13:42.052 12630.575 - 12690.153: 83.2009% ( 47) 00:13:42.052 12690.153 - 12749.731: 83.6158% ( 47) 00:13:42.052 12749.731 - 12809.309: 83.9866% ( 42) 00:13:42.052 12809.309 - 12868.887: 84.3485% ( 41) 00:13:42.052 12868.887 - 12928.465: 84.7722% ( 48) 00:13:42.052 12928.465 - 12988.044: 85.1960% ( 48) 00:13:42.052 12988.044 - 13047.622: 85.5667% ( 42) 00:13:42.052 13047.622 - 13107.200: 85.9463% ( 43) 00:13:42.052 13107.200 - 13166.778: 86.2641% ( 36) 00:13:42.052 13166.778 - 13226.356: 86.5466% ( 32) 00:13:42.052 13226.356 - 13285.935: 86.8468% ( 34) 00:13:42.052 13285.935 - 13345.513: 87.2087% ( 41) 00:13:42.052 13345.513 - 13405.091: 87.6148% ( 46) 00:13:42.052 13405.091 - 13464.669: 87.9855% ( 42) 00:13:42.052 13464.669 - 13524.247: 88.3563% ( 42) 00:13:42.052 13524.247 - 13583.825: 88.7094% ( 40) 00:13:42.052 13583.825 - 13643.404: 89.0537% ( 39) 00:13:42.052 13643.404 - 13702.982: 89.4333% ( 43) 00:13:42.052 13702.982 - 13762.560: 89.8746% ( 50) 00:13:42.052 13762.560 - 13822.138: 90.3249% ( 51) 00:13:42.052 13822.138 - 13881.716: 90.7927% ( 53) 00:13:42.052 13881.716 - 13941.295: 91.3400% ( 62) 00:13:42.052 13941.295 - 14000.873: 91.8167% ( 54) 00:13:42.052 14000.873 - 14060.451: 92.3464% ( 60) 00:13:42.052 14060.451 - 14120.029: 92.8407% ( 56) 00:13:42.052 14120.029 - 14179.607: 93.3439% ( 57) 00:13:42.052 14179.607 - 14239.185: 93.8030% ( 52) 00:13:42.052 14239.185 - 14298.764: 94.2267% ( 48) 00:13:42.052 14298.764 - 14358.342: 94.6504% ( 48) 00:13:42.052 14358.342 - 14417.920: 95.0212% ( 42) 00:13:42.052 14417.920 - 14477.498: 95.4273% ( 46) 00:13:42.052 14477.498 - 14537.076: 95.7451% ( 36) 00:13:42.052 14537.076 - 14596.655: 96.0893% ( 39) 00:13:42.052 14596.655 - 14656.233: 96.4160% ( 37) 00:13:42.052 14656.233 - 14715.811: 96.7073% ( 33) 00:13:42.052 14715.811 - 14775.389: 96.9898% ( 32) 00:13:42.052 14775.389 - 14834.967: 97.2193% ( 26) 00:13:42.052 14834.967 - 14894.545: 97.3958% ( 20) 00:13:42.052 14894.545 - 14954.124: 97.5371% ( 16) 00:13:42.052 14954.124 - 15013.702: 97.6783% ( 16) 00:13:42.052 15013.702 - 15073.280: 97.7843% ( 12) 00:13:42.052 15073.280 - 15132.858: 97.8990% ( 13) 00:13:42.052 15132.858 - 15192.436: 98.0138% ( 13) 00:13:42.052 15192.436 - 15252.015: 98.1285% ( 13) 00:13:42.053 15252.015 - 15371.171: 98.2963% ( 19) 00:13:42.053 15371.171 - 15490.327: 98.3051% ( 1) 00:13:42.053 15847.796 - 15966.953: 98.3139% ( 1) 00:13:42.053 15966.953 - 16086.109: 98.3845% ( 8) 00:13:42.053 16086.109 - 16205.265: 98.4552% ( 8) 00:13:42.053 16205.265 - 16324.422: 98.4993% ( 5) 00:13:42.053 16324.422 - 16443.578: 98.5699% ( 8) 00:13:42.053 16443.578 - 16562.735: 98.6141% ( 5) 00:13:42.053 16562.735 - 16681.891: 98.6847% ( 8) 00:13:42.053 16681.891 - 16801.047: 98.7376% ( 6) 00:13:42.053 16801.047 - 16920.204: 98.8083% ( 8) 00:13:42.053 16920.204 - 17039.360: 98.8701% ( 7) 00:13:42.053 34793.658 - 35031.971: 98.9142% ( 5) 00:13:42.053 35031.971 - 35270.284: 98.9495% ( 4) 00:13:42.053 35270.284 - 35508.596: 99.0025% ( 6) 00:13:42.053 35508.596 - 35746.909: 99.0466% ( 5) 00:13:42.053 35746.909 - 35985.222: 99.0907% ( 5) 00:13:42.053 35985.222 - 36223.535: 99.1437% ( 6) 00:13:42.053 36223.535 - 
36461.847: 99.1790% ( 4) 00:13:42.053 36461.847 - 36700.160: 99.2320% ( 6) 00:13:42.053 36700.160 - 36938.473: 99.2761% ( 5) 00:13:42.053 36938.473 - 37176.785: 99.3203% ( 5) 00:13:42.053 37176.785 - 37415.098: 99.3644% ( 5) 00:13:42.053 37415.098 - 37653.411: 99.3997% ( 4) 00:13:42.053 37653.411 - 37891.724: 99.4350% ( 4) 00:13:42.053 42896.291 - 43134.604: 99.4527% ( 2) 00:13:42.053 43134.604 - 43372.916: 99.4968% ( 5) 00:13:42.053 43372.916 - 43611.229: 99.5498% ( 6) 00:13:42.053 43611.229 - 43849.542: 99.5939% ( 5) 00:13:42.053 43849.542 - 44087.855: 99.6469% ( 6) 00:13:42.053 44087.855 - 44326.167: 99.6999% ( 6) 00:13:42.053 44326.167 - 44564.480: 99.7528% ( 6) 00:13:42.053 44564.480 - 44802.793: 99.7970% ( 5) 00:13:42.053 44802.793 - 45041.105: 99.8411% ( 5) 00:13:42.053 45041.105 - 45279.418: 99.8941% ( 6) 00:13:42.053 45279.418 - 45517.731: 99.9470% ( 6) 00:13:42.053 45517.731 - 45756.044: 100.0000% ( 6) 00:13:42.053 00:13:42.053 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:42.053 ============================================================================== 00:13:42.053 Range in us Cumulative IO count 00:13:42.053 8460.102 - 8519.680: 0.0706% ( 8) 00:13:42.053 8519.680 - 8579.258: 0.2030% ( 15) 00:13:42.053 8579.258 - 8638.836: 0.4767% ( 31) 00:13:42.053 8638.836 - 8698.415: 0.8298% ( 40) 00:13:42.053 8698.415 - 8757.993: 1.2800% ( 51) 00:13:42.053 8757.993 - 8817.571: 1.8538% ( 65) 00:13:42.053 8817.571 - 8877.149: 2.5865% ( 83) 00:13:42.053 8877.149 - 8936.727: 3.2574% ( 76) 00:13:42.053 8936.727 - 8996.305: 4.1314% ( 99) 00:13:42.053 8996.305 - 9055.884: 5.2613% ( 128) 00:13:42.053 9055.884 - 9115.462: 6.4619% ( 136) 00:13:42.053 9115.462 - 9175.040: 7.8037% ( 152) 00:13:42.053 9175.040 - 9234.618: 9.0660% ( 143) 00:13:42.053 9234.618 - 9294.196: 10.4873% ( 161) 00:13:42.053 9294.196 - 9353.775: 12.1822% ( 192) 00:13:42.053 9353.775 - 9413.353: 13.8859% ( 193) 00:13:42.053 9413.353 - 9472.931: 15.6515% ( 200) 00:13:42.053 9472.931 - 9532.509: 17.3464% ( 192) 00:13:42.053 9532.509 - 9592.087: 19.0590% ( 194) 00:13:42.053 9592.087 - 9651.665: 20.7362% ( 190) 00:13:42.053 9651.665 - 9711.244: 22.4929% ( 199) 00:13:42.053 9711.244 - 9770.822: 24.3644% ( 212) 00:13:42.053 9770.822 - 9830.400: 26.2800% ( 217) 00:13:42.053 9830.400 - 9889.978: 28.2927% ( 228) 00:13:42.053 9889.978 - 9949.556: 30.2260% ( 219) 00:13:42.053 9949.556 - 10009.135: 32.2917% ( 234) 00:13:42.053 10009.135 - 10068.713: 34.2073% ( 217) 00:13:42.053 10068.713 - 10128.291: 35.9905% ( 202) 00:13:42.053 10128.291 - 10187.869: 38.0297% ( 231) 00:13:42.053 10187.869 - 10247.447: 39.9718% ( 220) 00:13:42.053 10247.447 - 10307.025: 41.9138% ( 220) 00:13:42.053 10307.025 - 10366.604: 43.8824% ( 223) 00:13:42.053 10366.604 - 10426.182: 45.6833% ( 204) 00:13:42.053 10426.182 - 10485.760: 47.4400% ( 199) 00:13:42.053 10485.760 - 10545.338: 49.3909% ( 221) 00:13:42.053 10545.338 - 10604.916: 51.2094% ( 206) 00:13:42.053 10604.916 - 10664.495: 52.9838% ( 201) 00:13:42.053 10664.495 - 10724.073: 54.7405% ( 199) 00:13:42.053 10724.073 - 10783.651: 56.3559% ( 183) 00:13:42.053 10783.651 - 10843.229: 58.0244% ( 189) 00:13:42.053 10843.229 - 10902.807: 59.6575% ( 185) 00:13:42.053 10902.807 - 10962.385: 61.2553% ( 181) 00:13:42.053 10962.385 - 11021.964: 62.8090% ( 176) 00:13:42.053 11021.964 - 11081.542: 64.2479% ( 163) 00:13:42.053 11081.542 - 11141.120: 65.5367% ( 146) 00:13:42.053 11141.120 - 11200.698: 66.7549% ( 138) 00:13:42.053 11200.698 - 11260.276: 67.7966% ( 118) 00:13:42.053 11260.276 - 
11319.855: 68.8383% ( 118) 00:13:42.053 11319.855 - 11379.433: 69.8270% ( 112) 00:13:42.053 11379.433 - 11439.011: 70.7804% ( 108) 00:13:42.053 11439.011 - 11498.589: 71.6808% ( 102) 00:13:42.053 11498.589 - 11558.167: 72.5459% ( 98) 00:13:42.053 11558.167 - 11617.745: 73.3404% ( 90) 00:13:42.053 11617.745 - 11677.324: 74.1349% ( 90) 00:13:42.053 11677.324 - 11736.902: 74.9647% ( 94) 00:13:42.053 11736.902 - 11796.480: 75.6179% ( 74) 00:13:42.053 11796.480 - 11856.058: 76.1741% ( 63) 00:13:42.053 11856.058 - 11915.636: 76.7920% ( 70) 00:13:42.053 11915.636 - 11975.215: 77.3305% ( 61) 00:13:42.053 11975.215 - 12034.793: 77.8160% ( 55) 00:13:42.053 12034.793 - 12094.371: 78.3016% ( 55) 00:13:42.053 12094.371 - 12153.949: 78.7429% ( 50) 00:13:42.053 12153.949 - 12213.527: 79.2020% ( 52) 00:13:42.053 12213.527 - 12273.105: 79.6963% ( 56) 00:13:42.053 12273.105 - 12332.684: 80.1642% ( 53) 00:13:42.053 12332.684 - 12392.262: 80.6585% ( 56) 00:13:42.053 12392.262 - 12451.840: 81.2147% ( 63) 00:13:42.053 12451.840 - 12511.418: 81.6472% ( 49) 00:13:42.053 12511.418 - 12570.996: 82.0886% ( 50) 00:13:42.053 12570.996 - 12630.575: 82.4859% ( 45) 00:13:42.053 12630.575 - 12690.153: 82.8919% ( 46) 00:13:42.053 12690.153 - 12749.731: 83.2451% ( 40) 00:13:42.053 12749.731 - 12809.309: 83.6600% ( 47) 00:13:42.053 12809.309 - 12868.887: 84.0749% ( 47) 00:13:42.053 12868.887 - 12928.465: 84.4809% ( 46) 00:13:42.053 12928.465 - 12988.044: 84.9311% ( 51) 00:13:42.053 12988.044 - 13047.622: 85.3460% ( 47) 00:13:42.053 13047.622 - 13107.200: 85.7521% ( 46) 00:13:42.053 13107.200 - 13166.778: 86.1494% ( 45) 00:13:42.053 13166.778 - 13226.356: 86.5731% ( 48) 00:13:42.053 13226.356 - 13285.935: 86.9792% ( 46) 00:13:42.053 13285.935 - 13345.513: 87.3764% ( 45) 00:13:42.053 13345.513 - 13405.091: 87.7825% ( 46) 00:13:42.053 13405.091 - 13464.669: 88.1886% ( 46) 00:13:42.053 13464.669 - 13524.247: 88.5858% ( 45) 00:13:42.053 13524.247 - 13583.825: 88.9919% ( 46) 00:13:42.053 13583.825 - 13643.404: 89.3980% ( 46) 00:13:42.053 13643.404 - 13702.982: 89.7775% ( 43) 00:13:42.053 13702.982 - 13762.560: 90.1571% ( 43) 00:13:42.053 13762.560 - 13822.138: 90.5544% ( 45) 00:13:42.053 13822.138 - 13881.716: 90.9605% ( 46) 00:13:42.053 13881.716 - 13941.295: 91.3930% ( 49) 00:13:42.053 13941.295 - 14000.873: 91.8520% ( 52) 00:13:42.053 14000.873 - 14060.451: 92.3464% ( 56) 00:13:42.053 14060.451 - 14120.029: 92.7790% ( 49) 00:13:42.053 14120.029 - 14179.607: 93.2733% ( 56) 00:13:42.053 14179.607 - 14239.185: 93.7588% ( 55) 00:13:42.053 14239.185 - 14298.764: 94.2090% ( 51) 00:13:42.053 14298.764 - 14358.342: 94.6593% ( 51) 00:13:42.053 14358.342 - 14417.920: 95.0565% ( 45) 00:13:42.053 14417.920 - 14477.498: 95.4361% ( 43) 00:13:42.053 14477.498 - 14537.076: 95.7715% ( 38) 00:13:42.053 14537.076 - 14596.655: 96.0805% ( 35) 00:13:42.053 14596.655 - 14656.233: 96.3542% ( 31) 00:13:42.053 14656.233 - 14715.811: 96.6190% ( 30) 00:13:42.053 14715.811 - 14775.389: 96.8485% ( 26) 00:13:42.053 14775.389 - 14834.967: 97.0780% ( 26) 00:13:42.053 14834.967 - 14894.545: 97.2634% ( 21) 00:13:42.053 14894.545 - 14954.124: 97.4753% ( 24) 00:13:42.053 14954.124 - 15013.702: 97.6518% ( 20) 00:13:42.053 15013.702 - 15073.280: 97.7931% ( 16) 00:13:42.053 15073.280 - 15132.858: 97.9078% ( 13) 00:13:42.053 15132.858 - 15192.436: 98.0049% ( 11) 00:13:42.053 15192.436 - 15252.015: 98.0667% ( 7) 00:13:42.053 15252.015 - 15371.171: 98.1197% ( 6) 00:13:42.053 15371.171 - 15490.327: 98.1903% ( 8) 00:13:42.053 15490.327 - 15609.484: 98.2521% ( 7) 
00:13:42.053 15609.484 - 15728.640: 98.3051% ( 6) 00:13:42.053 15847.796 - 15966.953: 98.3492% ( 5) 00:13:42.053 15966.953 - 16086.109: 98.3934% ( 5) 00:13:42.053 16086.109 - 16205.265: 98.4640% ( 8) 00:13:42.053 16205.265 - 16324.422: 98.5346% ( 8) 00:13:42.053 16324.422 - 16443.578: 98.5876% ( 6) 00:13:42.053 16443.578 - 16562.735: 98.6670% ( 9) 00:13:42.053 16562.735 - 16681.891: 98.7288% ( 7) 00:13:42.053 16681.891 - 16801.047: 98.7818% ( 6) 00:13:42.053 16801.047 - 16920.204: 98.8436% ( 7) 00:13:42.053 16920.204 - 17039.360: 98.8701% ( 3) 00:13:42.053 31218.967 - 31457.280: 98.9054% ( 4) 00:13:42.053 31457.280 - 31695.593: 98.9495% ( 5) 00:13:42.053 31695.593 - 31933.905: 98.9848% ( 4) 00:13:42.053 31933.905 - 32172.218: 99.0201% ( 4) 00:13:42.053 32172.218 - 32410.531: 99.0378% ( 2) 00:13:42.053 32410.531 - 32648.844: 99.0819% ( 5) 00:13:42.053 32648.844 - 32887.156: 99.1261% ( 5) 00:13:42.053 32887.156 - 33125.469: 99.1790% ( 6) 00:13:42.053 33125.469 - 33363.782: 99.2232% ( 5) 00:13:42.053 33363.782 - 33602.095: 99.2673% ( 5) 00:13:42.053 33602.095 - 33840.407: 99.3114% ( 5) 00:13:42.053 33840.407 - 34078.720: 99.3644% ( 6) 00:13:42.053 34078.720 - 34317.033: 99.4085% ( 5) 00:13:42.053 34317.033 - 34555.345: 99.4350% ( 3) 00:13:42.053 39559.913 - 39798.225: 99.4615% ( 3) 00:13:42.053 39798.225 - 40036.538: 99.5145% ( 6) 00:13:42.053 40036.538 - 40274.851: 99.5674% ( 6) 00:13:42.053 40274.851 - 40513.164: 99.6204% ( 6) 00:13:42.053 40513.164 - 40751.476: 99.6734% ( 6) 00:13:42.053 40751.476 - 40989.789: 99.7263% ( 6) 00:13:42.053 40989.789 - 41228.102: 99.7793% ( 6) 00:13:42.054 41228.102 - 41466.415: 99.8234% ( 5) 00:13:42.054 41466.415 - 41704.727: 99.8499% ( 3) 00:13:42.054 41704.727 - 41943.040: 99.9029% ( 6) 00:13:42.054 41943.040 - 42181.353: 99.9559% ( 6) 00:13:42.054 42181.353 - 42419.665: 100.0000% ( 5) 00:13:42.054 00:13:42.054 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:42.054 ============================================================================== 00:13:42.054 Range in us Cumulative IO count 00:13:42.054 8460.102 - 8519.680: 0.1148% ( 13) 00:13:42.054 8519.680 - 8579.258: 0.2648% ( 17) 00:13:42.054 8579.258 - 8638.836: 0.6532% ( 44) 00:13:42.054 8638.836 - 8698.415: 0.9622% ( 35) 00:13:42.054 8698.415 - 8757.993: 1.4124% ( 51) 00:13:42.054 8757.993 - 8817.571: 2.0392% ( 71) 00:13:42.054 8817.571 - 8877.149: 2.7366% ( 79) 00:13:42.054 8877.149 - 8936.727: 3.5664% ( 94) 00:13:42.054 8936.727 - 8996.305: 4.4756% ( 103) 00:13:42.054 8996.305 - 9055.884: 5.3849% ( 103) 00:13:42.054 9055.884 - 9115.462: 6.6737% ( 146) 00:13:42.054 9115.462 - 9175.040: 7.8478% ( 133) 00:13:42.054 9175.040 - 9234.618: 9.2161% ( 155) 00:13:42.054 9234.618 - 9294.196: 10.6285% ( 160) 00:13:42.054 9294.196 - 9353.775: 12.2528% ( 184) 00:13:42.054 9353.775 - 9413.353: 13.8506% ( 181) 00:13:42.054 9413.353 - 9472.931: 15.5102% ( 188) 00:13:42.054 9472.931 - 9532.509: 17.2140% ( 193) 00:13:42.054 9532.509 - 9592.087: 19.0060% ( 203) 00:13:42.054 9592.087 - 9651.665: 20.7715% ( 200) 00:13:42.054 9651.665 - 9711.244: 22.4841% ( 194) 00:13:42.054 9711.244 - 9770.822: 24.3909% ( 216) 00:13:42.054 9770.822 - 9830.400: 26.3153% ( 218) 00:13:42.054 9830.400 - 9889.978: 28.3104% ( 226) 00:13:42.054 9889.978 - 9949.556: 30.2436% ( 219) 00:13:42.054 9949.556 - 10009.135: 32.1681% ( 218) 00:13:42.054 10009.135 - 10068.713: 34.1278% ( 222) 00:13:42.054 10068.713 - 10128.291: 35.9993% ( 212) 00:13:42.054 10128.291 - 10187.869: 37.9855% ( 225) 00:13:42.054 10187.869 - 10247.447: 
39.9188% ( 219) 00:13:42.054 10247.447 - 10307.025: 41.9050% ( 225) 00:13:42.054 10307.025 - 10366.604: 43.8383% ( 219) 00:13:42.054 10366.604 - 10426.182: 45.9216% ( 236) 00:13:42.054 10426.182 - 10485.760: 47.9255% ( 227) 00:13:42.054 10485.760 - 10545.338: 49.8588% ( 219) 00:13:42.054 10545.338 - 10604.916: 51.6773% ( 206) 00:13:42.054 10604.916 - 10664.495: 53.5840% ( 216) 00:13:42.054 10664.495 - 10724.073: 55.3761% ( 203) 00:13:42.054 10724.073 - 10783.651: 57.0004% ( 184) 00:13:42.054 10783.651 - 10843.229: 58.5629% ( 177) 00:13:42.054 10843.229 - 10902.807: 60.2313% ( 189) 00:13:42.054 10902.807 - 10962.385: 61.6790% ( 164) 00:13:42.054 10962.385 - 11021.964: 63.0208% ( 152) 00:13:42.054 11021.964 - 11081.542: 64.2920% ( 144) 00:13:42.054 11081.542 - 11141.120: 65.4926% ( 136) 00:13:42.054 11141.120 - 11200.698: 66.6137% ( 127) 00:13:42.054 11200.698 - 11260.276: 67.6819% ( 121) 00:13:42.054 11260.276 - 11319.855: 68.7412% ( 120) 00:13:42.054 11319.855 - 11379.433: 69.5975% ( 97) 00:13:42.054 11379.433 - 11439.011: 70.4626% ( 98) 00:13:42.054 11439.011 - 11498.589: 71.2041% ( 84) 00:13:42.054 11498.589 - 11558.167: 71.8927% ( 78) 00:13:42.054 11558.167 - 11617.745: 72.5194% ( 71) 00:13:42.054 11617.745 - 11677.324: 73.1992% ( 77) 00:13:42.054 11677.324 - 11736.902: 73.8877% ( 78) 00:13:42.054 11736.902 - 11796.480: 74.5939% ( 80) 00:13:42.054 11796.480 - 11856.058: 75.2119% ( 70) 00:13:42.054 11856.058 - 11915.636: 75.8033% ( 67) 00:13:42.054 11915.636 - 11975.215: 76.2624% ( 52) 00:13:42.054 11975.215 - 12034.793: 76.7479% ( 55) 00:13:42.054 12034.793 - 12094.371: 77.2246% ( 54) 00:13:42.054 12094.371 - 12153.949: 77.7278% ( 57) 00:13:42.054 12153.949 - 12213.527: 78.2044% ( 54) 00:13:42.054 12213.527 - 12273.105: 78.6282% ( 48) 00:13:42.054 12273.105 - 12332.684: 79.0519% ( 48) 00:13:42.054 12332.684 - 12392.262: 79.4933% ( 50) 00:13:42.054 12392.262 - 12451.840: 80.0053% ( 58) 00:13:42.054 12451.840 - 12511.418: 80.4996% ( 56) 00:13:42.054 12511.418 - 12570.996: 81.0470% ( 62) 00:13:42.054 12570.996 - 12630.575: 81.5413% ( 56) 00:13:42.054 12630.575 - 12690.153: 81.9827% ( 50) 00:13:42.054 12690.153 - 12749.731: 82.4682% ( 55) 00:13:42.054 12749.731 - 12809.309: 82.8655% ( 45) 00:13:42.054 12809.309 - 12868.887: 83.3686% ( 57) 00:13:42.054 12868.887 - 12928.465: 83.8630% ( 56) 00:13:42.054 12928.465 - 12988.044: 84.3044% ( 50) 00:13:42.054 12988.044 - 13047.622: 84.7987% ( 56) 00:13:42.054 13047.622 - 13107.200: 85.2489% ( 51) 00:13:42.054 13107.200 - 13166.778: 85.7433% ( 56) 00:13:42.054 13166.778 - 13226.356: 86.2553% ( 58) 00:13:42.054 13226.356 - 13285.935: 86.7673% ( 58) 00:13:42.054 13285.935 - 13345.513: 87.3058% ( 61) 00:13:42.054 13345.513 - 13405.091: 87.8531% ( 62) 00:13:42.054 13405.091 - 13464.669: 88.3121% ( 52) 00:13:42.054 13464.669 - 13524.247: 88.7270% ( 47) 00:13:42.054 13524.247 - 13583.825: 89.2037% ( 54) 00:13:42.054 13583.825 - 13643.404: 89.6098% ( 46) 00:13:42.054 13643.404 - 13702.982: 90.0335% ( 48) 00:13:42.054 13702.982 - 13762.560: 90.4043% ( 42) 00:13:42.054 13762.560 - 13822.138: 90.8192% ( 47) 00:13:42.054 13822.138 - 13881.716: 91.2076% ( 44) 00:13:42.054 13881.716 - 13941.295: 91.6755% ( 53) 00:13:42.054 13941.295 - 14000.873: 92.1345% ( 52) 00:13:42.054 14000.873 - 14060.451: 92.5759% ( 50) 00:13:42.054 14060.451 - 14120.029: 93.0261% ( 51) 00:13:42.054 14120.029 - 14179.607: 93.4675% ( 50) 00:13:42.054 14179.607 - 14239.185: 93.9442% ( 54) 00:13:42.054 14239.185 - 14298.764: 94.3944% ( 51) 00:13:42.054 14298.764 - 14358.342: 94.8093% ( 47) 
00:13:42.054 14358.342 - 14417.920: 95.1624% ( 40) 00:13:42.054 14417.920 - 14477.498: 95.5332% ( 42) 00:13:42.054 14477.498 - 14537.076: 95.8863% ( 40) 00:13:42.054 14537.076 - 14596.655: 96.1776% ( 33) 00:13:42.054 14596.655 - 14656.233: 96.4513% ( 31) 00:13:42.054 14656.233 - 14715.811: 96.6984% ( 28) 00:13:42.054 14715.811 - 14775.389: 96.9456% ( 28) 00:13:42.054 14775.389 - 14834.967: 97.1751% ( 26) 00:13:42.054 14834.967 - 14894.545: 97.4135% ( 27) 00:13:42.054 14894.545 - 14954.124: 97.5900% ( 20) 00:13:42.054 14954.124 - 15013.702: 97.8019% ( 24) 00:13:42.054 15013.702 - 15073.280: 97.9873% ( 21) 00:13:42.054 15073.280 - 15132.858: 98.1462% ( 18) 00:13:42.054 15132.858 - 15192.436: 98.2698% ( 14) 00:13:42.054 15192.436 - 15252.015: 98.3492% ( 9) 00:13:42.054 15252.015 - 15371.171: 98.4816% ( 15) 00:13:42.054 15371.171 - 15490.327: 98.6141% ( 15) 00:13:42.054 15490.327 - 15609.484: 98.6935% ( 9) 00:13:42.054 15609.484 - 15728.640: 98.7641% ( 8) 00:13:42.054 15728.640 - 15847.796: 98.8347% ( 8) 00:13:42.054 15847.796 - 15966.953: 98.8701% ( 4) 00:13:42.054 27763.433 - 27882.589: 98.8789% ( 1) 00:13:42.054 27882.589 - 28001.745: 98.8965% ( 2) 00:13:42.054 28001.745 - 28120.902: 98.9230% ( 3) 00:13:42.054 28120.902 - 28240.058: 98.9495% ( 3) 00:13:42.054 28240.058 - 28359.215: 98.9760% ( 3) 00:13:42.054 28359.215 - 28478.371: 98.9936% ( 2) 00:13:42.054 28478.371 - 28597.527: 99.0201% ( 3) 00:13:42.054 28597.527 - 28716.684: 99.0378% ( 2) 00:13:42.054 28716.684 - 28835.840: 99.0643% ( 3) 00:13:42.054 28835.840 - 28954.996: 99.0819% ( 2) 00:13:42.054 28954.996 - 29074.153: 99.0996% ( 2) 00:13:42.054 29074.153 - 29193.309: 99.1261% ( 3) 00:13:42.054 29193.309 - 29312.465: 99.1525% ( 3) 00:13:42.054 29312.465 - 29431.622: 99.1702% ( 2) 00:13:42.054 29431.622 - 29550.778: 99.1967% ( 3) 00:13:42.054 29550.778 - 29669.935: 99.2143% ( 2) 00:13:42.054 29669.935 - 29789.091: 99.2408% ( 3) 00:13:42.054 29789.091 - 29908.247: 99.2585% ( 2) 00:13:42.054 29908.247 - 30027.404: 99.2850% ( 3) 00:13:42.054 30027.404 - 30146.560: 99.3026% ( 2) 00:13:42.054 30146.560 - 30265.716: 99.3291% ( 3) 00:13:42.054 30265.716 - 30384.873: 99.3556% ( 3) 00:13:42.054 30384.873 - 30504.029: 99.3821% ( 3) 00:13:42.054 30504.029 - 30742.342: 99.4262% ( 5) 00:13:42.054 30742.342 - 30980.655: 99.4350% ( 1) 00:13:42.054 35746.909 - 35985.222: 99.4792% ( 5) 00:13:42.054 35985.222 - 36223.535: 99.5321% ( 6) 00:13:42.054 36223.535 - 36461.847: 99.5763% ( 5) 00:13:42.054 36461.847 - 36700.160: 99.6116% ( 4) 00:13:42.055 36700.160 - 36938.473: 99.6645% ( 6) 00:13:42.055 36938.473 - 37176.785: 99.7087% ( 5) 00:13:42.055 37176.785 - 37415.098: 99.7528% ( 5) 00:13:42.055 37415.098 - 37653.411: 99.7970% ( 5) 00:13:42.055 37653.411 - 37891.724: 99.8499% ( 6) 00:13:42.055 37891.724 - 38130.036: 99.8852% ( 4) 00:13:42.055 38130.036 - 38368.349: 99.9382% ( 6) 00:13:42.055 38368.349 - 38606.662: 99.9912% ( 6) 00:13:42.055 38606.662 - 38844.975: 100.0000% ( 1) 00:13:42.055 00:13:42.055 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:42.055 ============================================================================== 00:13:42.055 Range in us Cumulative IO count 00:13:42.055 8400.524 - 8460.102: 0.0530% ( 6) 00:13:42.055 8460.102 - 8519.680: 0.1059% ( 6) 00:13:42.055 8519.680 - 8579.258: 0.3796% ( 31) 00:13:42.055 8579.258 - 8638.836: 0.7239% ( 39) 00:13:42.055 8638.836 - 8698.415: 1.1388% ( 47) 00:13:42.055 8698.415 - 8757.993: 1.6419% ( 57) 00:13:42.055 8757.993 - 8817.571: 2.1804% ( 61) 00:13:42.055 8817.571 - 
8877.149: 2.8602% ( 77) 00:13:42.055 8877.149 - 8936.727: 3.6194% ( 86) 00:13:42.055 8936.727 - 8996.305: 4.5286% ( 103) 00:13:42.055 8996.305 - 9055.884: 5.5879% ( 120) 00:13:42.055 9055.884 - 9115.462: 6.6561% ( 121) 00:13:42.055 9115.462 - 9175.040: 7.8125% ( 131) 00:13:42.055 9175.040 - 9234.618: 9.2073% ( 158) 00:13:42.055 9234.618 - 9294.196: 10.6374% ( 162) 00:13:42.055 9294.196 - 9353.775: 12.1028% ( 166) 00:13:42.055 9353.775 - 9413.353: 13.5946% ( 169) 00:13:42.055 9413.353 - 9472.931: 15.1042% ( 171) 00:13:42.055 9472.931 - 9532.509: 16.8697% ( 200) 00:13:42.055 9532.509 - 9592.087: 18.5558% ( 191) 00:13:42.055 9592.087 - 9651.665: 20.3831% ( 207) 00:13:42.055 9651.665 - 9711.244: 22.0427% ( 188) 00:13:42.055 9711.244 - 9770.822: 23.8701% ( 207) 00:13:42.055 9770.822 - 9830.400: 25.8651% ( 226) 00:13:42.055 9830.400 - 9889.978: 27.8513% ( 225) 00:13:42.055 9889.978 - 9949.556: 29.9170% ( 234) 00:13:42.055 9949.556 - 10009.135: 32.0004% ( 236) 00:13:42.055 10009.135 - 10068.713: 34.0572% ( 233) 00:13:42.055 10068.713 - 10128.291: 36.0699% ( 228) 00:13:42.055 10128.291 - 10187.869: 38.0826% ( 228) 00:13:42.055 10187.869 - 10247.447: 40.1218% ( 231) 00:13:42.055 10247.447 - 10307.025: 42.0639% ( 220) 00:13:42.055 10307.025 - 10366.604: 44.1737% ( 239) 00:13:42.055 10366.604 - 10426.182: 46.2394% ( 234) 00:13:42.055 10426.182 - 10485.760: 48.1727% ( 219) 00:13:42.055 10485.760 - 10545.338: 50.1766% ( 227) 00:13:42.055 10545.338 - 10604.916: 52.0657% ( 214) 00:13:42.055 10604.916 - 10664.495: 53.9725% ( 216) 00:13:42.055 10664.495 - 10724.073: 55.7380% ( 200) 00:13:42.055 10724.073 - 10783.651: 57.3976% ( 188) 00:13:42.055 10783.651 - 10843.229: 58.9778% ( 179) 00:13:42.055 10843.229 - 10902.807: 60.6638% ( 191) 00:13:42.055 10902.807 - 10962.385: 62.0851% ( 161) 00:13:42.055 10962.385 - 11021.964: 63.5505% ( 166) 00:13:42.055 11021.964 - 11081.542: 64.8570% ( 148) 00:13:42.055 11081.542 - 11141.120: 66.0222% ( 132) 00:13:42.055 11141.120 - 11200.698: 67.0286% ( 114) 00:13:42.055 11200.698 - 11260.276: 67.9820% ( 108) 00:13:42.055 11260.276 - 11319.855: 68.9354% ( 108) 00:13:42.055 11319.855 - 11379.433: 69.7652% ( 94) 00:13:42.055 11379.433 - 11439.011: 70.5597% ( 90) 00:13:42.055 11439.011 - 11498.589: 71.2835% ( 82) 00:13:42.055 11498.589 - 11558.167: 71.9809% ( 79) 00:13:42.055 11558.167 - 11617.745: 72.6342% ( 74) 00:13:42.055 11617.745 - 11677.324: 73.3139% ( 77) 00:13:42.055 11677.324 - 11736.902: 73.9407% ( 71) 00:13:42.055 11736.902 - 11796.480: 74.4703% ( 60) 00:13:42.055 11796.480 - 11856.058: 75.0353% ( 64) 00:13:42.055 11856.058 - 11915.636: 75.6179% ( 66) 00:13:42.055 11915.636 - 11975.215: 76.1123% ( 56) 00:13:42.055 11975.215 - 12034.793: 76.5713% ( 52) 00:13:42.055 12034.793 - 12094.371: 77.0304% ( 52) 00:13:42.055 12094.371 - 12153.949: 77.4806% ( 51) 00:13:42.055 12153.949 - 12213.527: 77.9131% ( 49) 00:13:42.055 12213.527 - 12273.105: 78.3633% ( 51) 00:13:42.055 12273.105 - 12332.684: 78.8400% ( 54) 00:13:42.055 12332.684 - 12392.262: 79.3785% ( 61) 00:13:42.055 12392.262 - 12451.840: 79.8817% ( 57) 00:13:42.055 12451.840 - 12511.418: 80.3761% ( 56) 00:13:42.055 12511.418 - 12570.996: 80.8263% ( 51) 00:13:42.055 12570.996 - 12630.575: 81.2765% ( 51) 00:13:42.055 12630.575 - 12690.153: 81.7532% ( 54) 00:13:42.055 12690.153 - 12749.731: 82.2652% ( 58) 00:13:42.055 12749.731 - 12809.309: 82.8390% ( 65) 00:13:42.055 12809.309 - 12868.887: 83.4304% ( 67) 00:13:42.055 12868.887 - 12928.465: 84.0042% ( 65) 00:13:42.055 12928.465 - 12988.044: 84.4633% ( 52) 00:13:42.055 
12988.044 - 13047.622: 84.9311% ( 53) 00:13:42.055 13047.622 - 13107.200: 85.3284% ( 45) 00:13:42.055 13107.200 - 13166.778: 85.8316% ( 57) 00:13:42.055 13166.778 - 13226.356: 86.3612% ( 60) 00:13:42.055 13226.356 - 13285.935: 86.8732% ( 58) 00:13:42.055 13285.935 - 13345.513: 87.3588% ( 55) 00:13:42.055 13345.513 - 13405.091: 87.8001% ( 50) 00:13:42.055 13405.091 - 13464.669: 88.2504% ( 51) 00:13:42.055 13464.669 - 13524.247: 88.7182% ( 53) 00:13:42.055 13524.247 - 13583.825: 89.2037% ( 55) 00:13:42.055 13583.825 - 13643.404: 89.6540% ( 51) 00:13:42.055 13643.404 - 13702.982: 90.0512% ( 45) 00:13:42.055 13702.982 - 13762.560: 90.4484% ( 45) 00:13:42.055 13762.560 - 13822.138: 90.8633% ( 47) 00:13:42.055 13822.138 - 13881.716: 91.3047% ( 50) 00:13:42.055 13881.716 - 13941.295: 91.7196% ( 47) 00:13:42.055 13941.295 - 14000.873: 92.1257% ( 46) 00:13:42.055 14000.873 - 14060.451: 92.4876% ( 41) 00:13:42.055 14060.451 - 14120.029: 92.9732% ( 55) 00:13:42.055 14120.029 - 14179.607: 93.4675% ( 56) 00:13:42.055 14179.607 - 14239.185: 93.8912% ( 48) 00:13:42.055 14239.185 - 14298.764: 94.3326% ( 50) 00:13:42.055 14298.764 - 14358.342: 94.7299% ( 45) 00:13:42.055 14358.342 - 14417.920: 95.0742% ( 39) 00:13:42.055 14417.920 - 14477.498: 95.3655% ( 33) 00:13:42.055 14477.498 - 14537.076: 95.6480% ( 32) 00:13:42.055 14537.076 - 14596.655: 95.9304% ( 32) 00:13:42.055 14596.655 - 14656.233: 96.2129% ( 32) 00:13:42.055 14656.233 - 14715.811: 96.4689% ( 29) 00:13:42.055 14715.811 - 14775.389: 96.7073% ( 27) 00:13:42.055 14775.389 - 14834.967: 96.9544% ( 28) 00:13:42.055 14834.967 - 14894.545: 97.1928% ( 27) 00:13:42.055 14894.545 - 14954.124: 97.4047% ( 24) 00:13:42.055 14954.124 - 15013.702: 97.6077% ( 23) 00:13:42.055 15013.702 - 15073.280: 97.7666% ( 18) 00:13:42.055 15073.280 - 15132.858: 97.9078% ( 16) 00:13:42.055 15132.858 - 15192.436: 97.9608% ( 6) 00:13:42.055 15192.436 - 15252.015: 98.0314% ( 8) 00:13:42.055 15252.015 - 15371.171: 98.1727% ( 16) 00:13:42.055 15371.171 - 15490.327: 98.2963% ( 14) 00:13:42.055 15490.327 - 15609.484: 98.4110% ( 13) 00:13:42.055 15609.484 - 15728.640: 98.5346% ( 14) 00:13:42.055 15728.640 - 15847.796: 98.6494% ( 13) 00:13:42.055 15847.796 - 15966.953: 98.7200% ( 8) 00:13:42.055 15966.953 - 16086.109: 98.7994% ( 9) 00:13:42.055 16086.109 - 16205.265: 98.8701% ( 8) 00:13:42.055 24784.524 - 24903.680: 98.8877% ( 2) 00:13:42.055 24903.680 - 25022.836: 98.9054% ( 2) 00:13:42.055 25022.836 - 25141.993: 98.9319% ( 3) 00:13:42.055 25141.993 - 25261.149: 98.9583% ( 3) 00:13:42.055 25261.149 - 25380.305: 98.9760% ( 2) 00:13:42.055 25380.305 - 25499.462: 99.0025% ( 3) 00:13:42.055 25499.462 - 25618.618: 99.0290% ( 3) 00:13:42.055 25618.618 - 25737.775: 99.0554% ( 3) 00:13:42.055 25737.775 - 25856.931: 99.0819% ( 3) 00:13:42.055 25856.931 - 25976.087: 99.0996% ( 2) 00:13:42.055 25976.087 - 26095.244: 99.1261% ( 3) 00:13:42.055 26095.244 - 26214.400: 99.1525% ( 3) 00:13:42.055 26214.400 - 26333.556: 99.1790% ( 3) 00:13:42.055 26333.556 - 26452.713: 99.1967% ( 2) 00:13:42.055 26452.713 - 26571.869: 99.2232% ( 3) 00:13:42.055 26571.869 - 26691.025: 99.2496% ( 3) 00:13:42.055 26691.025 - 26810.182: 99.2673% ( 2) 00:13:42.055 26810.182 - 26929.338: 99.2938% ( 3) 00:13:42.055 26929.338 - 27048.495: 99.3203% ( 3) 00:13:42.055 27048.495 - 27167.651: 99.3379% ( 2) 00:13:42.055 27167.651 - 27286.807: 99.3644% ( 3) 00:13:42.055 27286.807 - 27405.964: 99.3909% ( 3) 00:13:42.055 27405.964 - 27525.120: 99.4085% ( 2) 00:13:42.055 27525.120 - 27644.276: 99.4350% ( 3) 00:13:42.055 32410.531 - 
32648.844: 99.4703% ( 4) 00:13:42.055 32648.844 - 32887.156: 99.5233% ( 6) 00:13:42.055 32887.156 - 33125.469: 99.5763% ( 6) 00:13:42.055 33125.469 - 33363.782: 99.6292% ( 6) 00:13:42.055 33363.782 - 33602.095: 99.6734% ( 5) 00:13:42.055 33602.095 - 33840.407: 99.7263% ( 6) 00:13:42.055 33840.407 - 34078.720: 99.7793% ( 6) 00:13:42.055 34078.720 - 34317.033: 99.8323% ( 6) 00:13:42.055 34317.033 - 34555.345: 99.8852% ( 6) 00:13:42.055 34555.345 - 34793.658: 99.9294% ( 5) 00:13:42.055 34793.658 - 35031.971: 99.9912% ( 7) 00:13:42.055 35031.971 - 35270.284: 100.0000% ( 1) 00:13:42.055 00:13:42.055 10:19:36 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:13:43.431 Initializing NVMe Controllers 00:13:43.431 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:43.431 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:43.431 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:43.431 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:43.431 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:43.431 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:43.431 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:43.431 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:43.431 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:43.431 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:43.431 Initialization complete. Launching workers. 00:13:43.431 ======================================================== 00:13:43.431 Latency(us) 00:13:43.431 Device Information : IOPS MiB/s Average min max 00:13:43.431 PCIE (0000:00:10.0) NSID 1 from core 0: 10541.28 123.53 12176.89 9695.43 44741.45 00:13:43.431 PCIE (0000:00:11.0) NSID 1 from core 0: 10541.28 123.53 12150.58 9863.51 41857.86 00:13:43.431 PCIE (0000:00:13.0) NSID 1 from core 0: 10541.28 123.53 12123.76 10040.01 39979.08 00:13:43.431 PCIE (0000:00:12.0) NSID 1 from core 0: 10541.28 123.53 12096.33 9871.23 37425.31 00:13:43.431 PCIE (0000:00:12.0) NSID 2 from core 0: 10541.28 123.53 12068.19 9786.87 34732.51 00:13:43.431 PCIE (0000:00:12.0) NSID 3 from core 0: 10541.28 123.53 12039.69 9834.38 31969.56 00:13:43.431 ======================================================== 00:13:43.431 Total : 63247.67 741.18 12109.24 9695.43 44741.45 00:13:43.431 00:13:43.431 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:43.432 ================================================================================= 00:13:43.432 1.00000% : 10187.869us 00:13:43.432 10.00000% : 10843.229us 00:13:43.432 25.00000% : 11200.698us 00:13:43.432 50.00000% : 11677.324us 00:13:43.432 75.00000% : 12332.684us 00:13:43.432 90.00000% : 13345.513us 00:13:43.432 95.00000% : 14656.233us 00:13:43.432 98.00000% : 15490.327us 00:13:43.432 99.00000% : 33602.095us 00:13:43.432 99.50000% : 42657.978us 00:13:43.432 99.90000% : 44326.167us 00:13:43.432 99.99000% : 44802.793us 00:13:43.432 99.99900% : 44802.793us 00:13:43.432 99.99990% : 44802.793us 00:13:43.432 99.99999% : 44802.793us 00:13:43.432 00:13:43.432 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:43.432 ================================================================================= 00:13:43.432 1.00000% : 10366.604us 00:13:43.432 10.00000% : 10962.385us 00:13:43.432 25.00000% : 11260.276us 00:13:43.432 50.00000% : 11677.324us 00:13:43.432 75.00000% : 12213.527us 00:13:43.432 90.00000% : 13166.778us 00:13:43.432 95.00000% : 14656.233us 
00:13:43.432 98.00000% : 15192.436us 00:13:43.432 99.00000% : 32172.218us 00:13:43.432 99.50000% : 39798.225us 00:13:43.432 99.90000% : 41466.415us 00:13:43.432 99.99000% : 41943.040us 00:13:43.432 99.99900% : 41943.040us 00:13:43.432 99.99990% : 41943.040us 00:13:43.432 99.99999% : 41943.040us 00:13:43.432 00:13:43.432 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:43.432 ================================================================================= 00:13:43.432 1.00000% : 10307.025us 00:13:43.432 10.00000% : 10962.385us 00:13:43.432 25.00000% : 11319.855us 00:13:43.432 50.00000% : 11677.324us 00:13:43.432 75.00000% : 12213.527us 00:13:43.432 90.00000% : 13047.622us 00:13:43.432 95.00000% : 14596.655us 00:13:43.432 98.00000% : 15371.171us 00:13:43.432 99.00000% : 30384.873us 00:13:43.432 99.50000% : 38130.036us 00:13:43.432 99.90000% : 39798.225us 00:13:43.432 99.99000% : 40036.538us 00:13:43.432 99.99900% : 40036.538us 00:13:43.432 99.99990% : 40036.538us 00:13:43.432 99.99999% : 40036.538us 00:13:43.432 00:13:43.432 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:43.432 ================================================================================= 00:13:43.432 1.00000% : 10307.025us 00:13:43.432 10.00000% : 10962.385us 00:13:43.432 25.00000% : 11319.855us 00:13:43.432 50.00000% : 11677.324us 00:13:43.432 75.00000% : 12153.949us 00:13:43.432 90.00000% : 13047.622us 00:13:43.432 95.00000% : 14715.811us 00:13:43.432 98.00000% : 15371.171us 00:13:43.432 99.00000% : 27882.589us 00:13:43.432 99.50000% : 33840.407us 00:13:43.432 99.90000% : 37176.785us 00:13:43.432 99.99000% : 37415.098us 00:13:43.432 99.99900% : 37653.411us 00:13:43.432 99.99990% : 37653.411us 00:13:43.432 99.99999% : 37653.411us 00:13:43.432 00:13:43.432 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:43.432 ================================================================================= 00:13:43.432 1.00000% : 10307.025us 00:13:43.432 10.00000% : 10962.385us 00:13:43.432 25.00000% : 11319.855us 00:13:43.432 50.00000% : 11677.324us 00:13:43.432 75.00000% : 12153.949us 00:13:43.432 90.00000% : 13107.200us 00:13:43.432 95.00000% : 14656.233us 00:13:43.432 98.00000% : 15490.327us 00:13:43.432 99.00000% : 25141.993us 00:13:43.432 99.50000% : 30980.655us 00:13:43.432 99.90000% : 34555.345us 00:13:43.432 99.99000% : 34793.658us 00:13:43.432 99.99900% : 34793.658us 00:13:43.432 99.99990% : 34793.658us 00:13:43.432 99.99999% : 34793.658us 00:13:43.432 00:13:43.432 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:43.432 ================================================================================= 00:13:43.432 1.00000% : 10426.182us 00:13:43.432 10.00000% : 11021.964us 00:13:43.432 25.00000% : 11319.855us 00:13:43.432 50.00000% : 11677.324us 00:13:43.432 75.00000% : 12153.949us 00:13:43.432 90.00000% : 13047.622us 00:13:43.432 95.00000% : 14775.389us 00:13:43.432 98.00000% : 15609.484us 00:13:43.432 99.00000% : 22163.084us 00:13:43.432 99.50000% : 30027.404us 00:13:43.432 99.90000% : 31695.593us 00:13:43.432 99.99000% : 32172.218us 00:13:43.432 99.99900% : 32172.218us 00:13:43.432 99.99990% : 32172.218us 00:13:43.432 99.99999% : 32172.218us 00:13:43.432 00:13:43.432 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:43.432 ============================================================================== 00:13:43.432 Range in us Cumulative IO count 00:13:43.432 9651.665 - 9711.244: 0.0095% ( 1) 00:13:43.432 9770.822 - 
9830.400: 0.0284% ( 2) 00:13:43.432 9889.978 - 9949.556: 0.1042% ( 8) 00:13:43.432 9949.556 - 10009.135: 0.1989% ( 10) 00:13:43.432 10009.135 - 10068.713: 0.4924% ( 31) 00:13:43.432 10068.713 - 10128.291: 0.7292% ( 25) 00:13:43.432 10128.291 - 10187.869: 1.1174% ( 41) 00:13:43.432 10187.869 - 10247.447: 1.4015% ( 30) 00:13:43.432 10247.447 - 10307.025: 1.7045% ( 32) 00:13:43.432 10307.025 - 10366.604: 2.0549% ( 37) 00:13:43.432 10366.604 - 10426.182: 2.5852% ( 56) 00:13:43.432 10426.182 - 10485.760: 3.4280% ( 89) 00:13:43.432 10485.760 - 10545.338: 4.1004% ( 71) 00:13:43.432 10545.338 - 10604.916: 4.8295% ( 77) 00:13:43.432 10604.916 - 10664.495: 5.8333% ( 106) 00:13:43.432 10664.495 - 10724.073: 7.2348% ( 148) 00:13:43.432 10724.073 - 10783.651: 8.8542% ( 171) 00:13:43.432 10783.651 - 10843.229: 11.1174% ( 239) 00:13:43.432 10843.229 - 10902.807: 13.3712% ( 238) 00:13:43.432 10902.807 - 10962.385: 15.1515% ( 188) 00:13:43.432 10962.385 - 11021.964: 17.6136% ( 260) 00:13:43.432 11021.964 - 11081.542: 20.1231% ( 265) 00:13:43.432 11081.542 - 11141.120: 22.6894% ( 271) 00:13:43.432 11141.120 - 11200.698: 25.8049% ( 329) 00:13:43.432 11200.698 - 11260.276: 28.7595% ( 312) 00:13:43.432 11260.276 - 11319.855: 32.6042% ( 406) 00:13:43.432 11319.855 - 11379.433: 35.5208% ( 308) 00:13:43.432 11379.433 - 11439.011: 38.9867% ( 366) 00:13:43.432 11439.011 - 11498.589: 42.3390% ( 354) 00:13:43.432 11498.589 - 11558.167: 45.3883% ( 322) 00:13:43.432 11558.167 - 11617.745: 48.6742% ( 347) 00:13:43.432 11617.745 - 11677.324: 51.6098% ( 310) 00:13:43.432 11677.324 - 11736.902: 54.8295% ( 340) 00:13:43.432 11736.902 - 11796.480: 57.3769% ( 269) 00:13:43.432 11796.480 - 11856.058: 60.0473% ( 282) 00:13:43.432 11856.058 - 11915.636: 62.6799% ( 278) 00:13:43.432 11915.636 - 11975.215: 64.7348% ( 217) 00:13:43.432 11975.215 - 12034.793: 66.8466% ( 223) 00:13:43.432 12034.793 - 12094.371: 68.8163% ( 208) 00:13:43.432 12094.371 - 12153.949: 70.7576% ( 205) 00:13:43.432 12153.949 - 12213.527: 72.8409% ( 220) 00:13:43.432 12213.527 - 12273.105: 74.8390% ( 211) 00:13:43.432 12273.105 - 12332.684: 76.7235% ( 199) 00:13:43.432 12332.684 - 12392.262: 78.5417% ( 192) 00:13:43.432 12392.262 - 12451.840: 79.9527% ( 149) 00:13:43.432 12451.840 - 12511.418: 81.3731% ( 150) 00:13:43.432 12511.418 - 12570.996: 82.4811% ( 117) 00:13:43.432 12570.996 - 12630.575: 83.2955% ( 86) 00:13:43.432 12630.575 - 12690.153: 84.0246% ( 77) 00:13:43.432 12690.153 - 12749.731: 84.8674% ( 89) 00:13:43.432 12749.731 - 12809.309: 85.4545% ( 62) 00:13:43.432 12809.309 - 12868.887: 86.2027% ( 79) 00:13:43.432 12868.887 - 12928.465: 86.9981% ( 84) 00:13:43.432 12928.465 - 12988.044: 87.5284% ( 56) 00:13:43.432 12988.044 - 13047.622: 88.0019% ( 50) 00:13:43.432 13047.622 - 13107.200: 88.4754% ( 50) 00:13:43.432 13107.200 - 13166.778: 88.9015% ( 45) 00:13:43.432 13166.778 - 13226.356: 89.4886% ( 62) 00:13:43.432 13226.356 - 13285.935: 89.9337% ( 47) 00:13:43.432 13285.935 - 13345.513: 90.4924% ( 59) 00:13:43.432 13345.513 - 13405.091: 90.8523% ( 38) 00:13:43.432 13405.091 - 13464.669: 91.1174% ( 28) 00:13:43.432 13464.669 - 13524.247: 91.3163% ( 21) 00:13:43.432 13524.247 - 13583.825: 91.5341% ( 23) 00:13:43.432 13583.825 - 13643.404: 91.7330% ( 21) 00:13:43.432 13643.404 - 13702.982: 91.9602% ( 24) 00:13:43.432 13702.982 - 13762.560: 92.1307% ( 18) 00:13:43.432 13762.560 - 13822.138: 92.3674% ( 25) 00:13:43.432 13822.138 - 13881.716: 92.5663% ( 21) 00:13:43.432 13881.716 - 13941.295: 92.8314% ( 28) 00:13:43.432 13941.295 - 14000.873: 93.0777% ( 
26) 00:13:43.432 14000.873 - 14060.451: 93.1913% ( 12) 00:13:43.432 14060.451 - 14120.029: 93.3712% ( 19) 00:13:43.432 14120.029 - 14179.607: 93.4091% ( 4) 00:13:43.432 14179.607 - 14239.185: 93.4754% ( 7) 00:13:43.432 14239.185 - 14298.764: 93.5511% ( 8) 00:13:43.432 14298.764 - 14358.342: 93.6932% ( 15) 00:13:43.432 14358.342 - 14417.920: 93.8731% ( 19) 00:13:43.432 14417.920 - 14477.498: 94.1098% ( 25) 00:13:43.432 14477.498 - 14537.076: 94.4129% ( 32) 00:13:43.432 14537.076 - 14596.655: 94.6875% ( 29) 00:13:43.432 14596.655 - 14656.233: 95.1894% ( 53) 00:13:43.432 14656.233 - 14715.811: 95.6439% ( 48) 00:13:43.432 14715.811 - 14775.389: 95.9470% ( 32) 00:13:43.432 14775.389 - 14834.967: 96.1742% ( 24) 00:13:43.432 14834.967 - 14894.545: 96.3920% ( 23) 00:13:43.432 14894.545 - 14954.124: 96.6288% ( 25) 00:13:43.432 14954.124 - 15013.702: 96.8466% ( 23) 00:13:43.432 15013.702 - 15073.280: 97.0265% ( 19) 00:13:43.432 15073.280 - 15132.858: 97.2254% ( 21) 00:13:43.432 15132.858 - 15192.436: 97.3580% ( 14) 00:13:43.432 15192.436 - 15252.015: 97.5473% ( 20) 00:13:43.432 15252.015 - 15371.171: 97.9261% ( 40) 00:13:43.432 15371.171 - 15490.327: 98.2765% ( 37) 00:13:43.432 15490.327 - 15609.484: 98.5890% ( 33) 00:13:43.432 15609.484 - 15728.640: 98.7595% ( 18) 00:13:43.433 15728.640 - 15847.796: 98.7879% ( 3) 00:13:43.433 32648.844 - 32887.156: 98.8163% ( 3) 00:13:43.433 32887.156 - 33125.469: 98.9205% ( 11) 00:13:43.433 33125.469 - 33363.782: 98.9678% ( 5) 00:13:43.433 33363.782 - 33602.095: 99.0152% ( 5) 00:13:43.433 33602.095 - 33840.407: 99.0530% ( 4) 00:13:43.433 33840.407 - 34078.720: 99.0625% ( 1) 00:13:43.433 34078.720 - 34317.033: 99.1098% ( 5) 00:13:43.433 34317.033 - 34555.345: 99.1572% ( 5) 00:13:43.433 34555.345 - 34793.658: 99.2045% ( 5) 00:13:43.433 34793.658 - 35031.971: 99.2519% ( 5) 00:13:43.433 35031.971 - 35270.284: 99.2992% ( 5) 00:13:43.433 35270.284 - 35508.596: 99.3371% ( 4) 00:13:43.433 35508.596 - 35746.909: 99.3939% ( 6) 00:13:43.433 41943.040 - 42181.353: 99.4413% ( 5) 00:13:43.433 42181.353 - 42419.665: 99.4981% ( 6) 00:13:43.433 42419.665 - 42657.978: 99.5549% ( 6) 00:13:43.433 42657.978 - 42896.291: 99.6023% ( 5) 00:13:43.433 42896.291 - 43134.604: 99.6591% ( 6) 00:13:43.433 43134.604 - 43372.916: 99.6970% ( 4) 00:13:43.433 43372.916 - 43611.229: 99.7538% ( 6) 00:13:43.433 43611.229 - 43849.542: 99.7917% ( 4) 00:13:43.433 43849.542 - 44087.855: 99.8580% ( 7) 00:13:43.433 44087.855 - 44326.167: 99.9053% ( 5) 00:13:43.433 44326.167 - 44564.480: 99.9621% ( 6) 00:13:43.433 44564.480 - 44802.793: 100.0000% ( 4) 00:13:43.433 00:13:43.433 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:43.433 ============================================================================== 00:13:43.433 Range in us Cumulative IO count 00:13:43.433 9830.400 - 9889.978: 0.0095% ( 1) 00:13:43.433 9949.556 - 10009.135: 0.0379% ( 3) 00:13:43.433 10009.135 - 10068.713: 0.0758% ( 4) 00:13:43.433 10068.713 - 10128.291: 0.1989% ( 13) 00:13:43.433 10128.291 - 10187.869: 0.3504% ( 16) 00:13:43.433 10187.869 - 10247.447: 0.5587% ( 22) 00:13:43.433 10247.447 - 10307.025: 0.7955% ( 25) 00:13:43.433 10307.025 - 10366.604: 1.1080% ( 33) 00:13:43.433 10366.604 - 10426.182: 1.5246% ( 44) 00:13:43.433 10426.182 - 10485.760: 2.1117% ( 62) 00:13:43.433 10485.760 - 10545.338: 3.0303% ( 97) 00:13:43.433 10545.338 - 10604.916: 3.6837% ( 69) 00:13:43.433 10604.916 - 10664.495: 4.4034% ( 76) 00:13:43.433 10664.495 - 10724.073: 5.3030% ( 95) 00:13:43.433 10724.073 - 10783.651: 6.4583% ( 122) 
00:13:43.433 10783.651 - 10843.229: 7.6515% ( 126) 00:13:43.433 10843.229 - 10902.807: 9.6307% ( 209) 00:13:43.433 10902.807 - 10962.385: 11.7235% ( 221) 00:13:43.433 10962.385 - 11021.964: 13.8920% ( 229) 00:13:43.433 11021.964 - 11081.542: 16.7424% ( 301) 00:13:43.433 11081.542 - 11141.120: 19.6307% ( 305) 00:13:43.433 11141.120 - 11200.698: 22.6231% ( 316) 00:13:43.433 11200.698 - 11260.276: 25.6913% ( 324) 00:13:43.433 11260.276 - 11319.855: 29.1004% ( 360) 00:13:43.433 11319.855 - 11379.433: 32.8125% ( 392) 00:13:43.433 11379.433 - 11439.011: 35.9375% ( 330) 00:13:43.433 11439.011 - 11498.589: 39.7159% ( 399) 00:13:43.433 11498.589 - 11558.167: 43.6837% ( 419) 00:13:43.433 11558.167 - 11617.745: 47.7178% ( 426) 00:13:43.433 11617.745 - 11677.324: 51.6667% ( 417) 00:13:43.433 11677.324 - 11736.902: 55.3504% ( 389) 00:13:43.433 11736.902 - 11796.480: 58.7027% ( 354) 00:13:43.433 11796.480 - 11856.058: 61.7803% ( 325) 00:13:43.433 11856.058 - 11915.636: 64.8580% ( 325) 00:13:43.433 11915.636 - 11975.215: 67.5379% ( 283) 00:13:43.433 11975.215 - 12034.793: 70.3598% ( 298) 00:13:43.433 12034.793 - 12094.371: 72.4621% ( 222) 00:13:43.433 12094.371 - 12153.949: 74.3466% ( 199) 00:13:43.433 12153.949 - 12213.527: 76.3163% ( 208) 00:13:43.433 12213.527 - 12273.105: 77.7841% ( 155) 00:13:43.433 12273.105 - 12332.684: 79.5549% ( 187) 00:13:43.433 12332.684 - 12392.262: 81.0417% ( 157) 00:13:43.433 12392.262 - 12451.840: 81.9508% ( 96) 00:13:43.433 12451.840 - 12511.418: 82.7936% ( 89) 00:13:43.433 12511.418 - 12570.996: 83.7784% ( 104) 00:13:43.433 12570.996 - 12630.575: 84.5833% ( 85) 00:13:43.433 12630.575 - 12690.153: 85.4356% ( 90) 00:13:43.433 12690.153 - 12749.731: 86.2784% ( 89) 00:13:43.433 12749.731 - 12809.309: 87.0928% ( 86) 00:13:43.433 12809.309 - 12868.887: 87.7462% ( 69) 00:13:43.433 12868.887 - 12928.465: 88.3902% ( 68) 00:13:43.433 12928.465 - 12988.044: 88.9773% ( 62) 00:13:43.433 12988.044 - 13047.622: 89.4223% ( 47) 00:13:43.433 13047.622 - 13107.200: 89.7538% ( 35) 00:13:43.433 13107.200 - 13166.778: 90.0568% ( 32) 00:13:43.433 13166.778 - 13226.356: 90.3598% ( 32) 00:13:43.433 13226.356 - 13285.935: 90.5398% ( 19) 00:13:43.433 13285.935 - 13345.513: 90.8523% ( 33) 00:13:43.433 13345.513 - 13405.091: 91.2405% ( 41) 00:13:43.433 13405.091 - 13464.669: 91.4299% ( 20) 00:13:43.433 13464.669 - 13524.247: 91.6193% ( 20) 00:13:43.433 13524.247 - 13583.825: 91.7519% ( 14) 00:13:43.433 13583.825 - 13643.404: 91.9034% ( 16) 00:13:43.433 13643.404 - 13702.982: 92.0739% ( 18) 00:13:43.433 13702.982 - 13762.560: 92.3011% ( 24) 00:13:43.433 13762.560 - 13822.138: 92.4148% ( 12) 00:13:43.433 13822.138 - 13881.716: 92.5189% ( 11) 00:13:43.433 13881.716 - 13941.295: 92.5852% ( 7) 00:13:43.433 13941.295 - 14000.873: 92.6515% ( 7) 00:13:43.433 14000.873 - 14060.451: 92.7367% ( 9) 00:13:43.433 14060.451 - 14120.029: 92.8409% ( 11) 00:13:43.433 14120.029 - 14179.607: 92.9924% ( 16) 00:13:43.433 14179.607 - 14239.185: 93.2197% ( 24) 00:13:43.433 14239.185 - 14298.764: 93.4375% ( 23) 00:13:43.433 14298.764 - 14358.342: 93.6458% ( 22) 00:13:43.433 14358.342 - 14417.920: 93.9299% ( 30) 00:13:43.433 14417.920 - 14477.498: 94.1004% ( 18) 00:13:43.433 14477.498 - 14537.076: 94.2803% ( 19) 00:13:43.433 14537.076 - 14596.655: 94.5265% ( 26) 00:13:43.433 14596.655 - 14656.233: 95.0189% ( 52) 00:13:43.433 14656.233 - 14715.811: 95.4356% ( 44) 00:13:43.433 14715.811 - 14775.389: 95.7860% ( 37) 00:13:43.433 14775.389 - 14834.967: 96.1837% ( 42) 00:13:43.433 14834.967 - 14894.545: 96.7519% ( 60) 00:13:43.433 
14894.545 - 14954.124: 97.2159% ( 49) 00:13:43.433 14954.124 - 15013.702: 97.5095% ( 31) 00:13:43.433 15013.702 - 15073.280: 97.7273% ( 23) 00:13:43.433 15073.280 - 15132.858: 97.9167% ( 20) 00:13:43.433 15132.858 - 15192.436: 98.0587% ( 15) 00:13:43.433 15192.436 - 15252.015: 98.2008% ( 15) 00:13:43.433 15252.015 - 15371.171: 98.4280% ( 24) 00:13:43.433 15371.171 - 15490.327: 98.5890% ( 17) 00:13:43.433 15490.327 - 15609.484: 98.7027% ( 12) 00:13:43.433 15609.484 - 15728.640: 98.7689% ( 7) 00:13:43.433 15728.640 - 15847.796: 98.7879% ( 2) 00:13:43.433 31218.967 - 31457.280: 98.8352% ( 5) 00:13:43.433 31457.280 - 31695.593: 98.8920% ( 6) 00:13:43.433 31695.593 - 31933.905: 98.9489% ( 6) 00:13:43.433 31933.905 - 32172.218: 99.0057% ( 6) 00:13:43.433 32172.218 - 32410.531: 99.0720% ( 7) 00:13:43.433 32410.531 - 32648.844: 99.1288% ( 6) 00:13:43.433 32648.844 - 32887.156: 99.1856% ( 6) 00:13:43.433 32887.156 - 33125.469: 99.2330% ( 5) 00:13:43.433 33125.469 - 33363.782: 99.2992% ( 7) 00:13:43.433 33363.782 - 33602.095: 99.3561% ( 6) 00:13:43.433 33602.095 - 33840.407: 99.3939% ( 4) 00:13:43.433 39321.600 - 39559.913: 99.4508% ( 6) 00:13:43.433 39559.913 - 39798.225: 99.5170% ( 7) 00:13:43.433 39798.225 - 40036.538: 99.5549% ( 4) 00:13:43.433 40036.538 - 40274.851: 99.6117% ( 6) 00:13:43.433 40274.851 - 40513.164: 99.6686% ( 6) 00:13:43.433 40513.164 - 40751.476: 99.7254% ( 6) 00:13:43.433 40751.476 - 40989.789: 99.7822% ( 6) 00:13:43.433 40989.789 - 41228.102: 99.8485% ( 7) 00:13:43.433 41228.102 - 41466.415: 99.9053% ( 6) 00:13:43.433 41466.415 - 41704.727: 99.9527% ( 5) 00:13:43.433 41704.727 - 41943.040: 100.0000% ( 5) 00:13:43.433 00:13:43.433 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:43.433 ============================================================================== 00:13:43.433 Range in us Cumulative IO count 00:13:43.433 10009.135 - 10068.713: 0.0284% ( 3) 00:13:43.433 10068.713 - 10128.291: 0.1515% ( 13) 00:13:43.433 10128.291 - 10187.869: 0.3314% ( 19) 00:13:43.433 10187.869 - 10247.447: 0.5492% ( 23) 00:13:43.433 10247.447 - 10307.025: 1.0038% ( 48) 00:13:43.433 10307.025 - 10366.604: 1.5152% ( 54) 00:13:43.433 10366.604 - 10426.182: 2.0455% ( 56) 00:13:43.433 10426.182 - 10485.760: 2.4811% ( 46) 00:13:43.433 10485.760 - 10545.338: 3.2576% ( 82) 00:13:43.433 10545.338 - 10604.916: 4.1383% ( 93) 00:13:43.433 10604.916 - 10664.495: 5.0947% ( 101) 00:13:43.433 10664.495 - 10724.073: 5.9280% ( 88) 00:13:43.433 10724.073 - 10783.651: 7.0455% ( 118) 00:13:43.433 10783.651 - 10843.229: 8.1155% ( 113) 00:13:43.433 10843.229 - 10902.807: 9.1951% ( 114) 00:13:43.433 10902.807 - 10962.385: 11.2027% ( 212) 00:13:43.433 10962.385 - 11021.964: 13.2008% ( 211) 00:13:43.433 11021.964 - 11081.542: 15.3030% ( 222) 00:13:43.433 11081.542 - 11141.120: 17.8788% ( 272) 00:13:43.433 11141.120 - 11200.698: 21.0701% ( 337) 00:13:43.433 11200.698 - 11260.276: 24.6402% ( 377) 00:13:43.433 11260.276 - 11319.855: 28.4754% ( 405) 00:13:43.433 11319.855 - 11379.433: 31.9129% ( 363) 00:13:43.433 11379.433 - 11439.011: 36.1269% ( 445) 00:13:43.433 11439.011 - 11498.589: 40.1705% ( 427) 00:13:43.433 11498.589 - 11558.167: 43.5985% ( 362) 00:13:43.433 11558.167 - 11617.745: 47.4527% ( 407) 00:13:43.433 11617.745 - 11677.324: 50.6818% ( 341) 00:13:43.433 11677.324 - 11736.902: 53.9110% ( 341) 00:13:43.433 11736.902 - 11796.480: 57.8693% ( 418) 00:13:43.433 11796.480 - 11856.058: 61.0322% ( 334) 00:13:43.433 11856.058 - 11915.636: 64.1572% ( 330) 00:13:43.433 11915.636 - 11975.215: 66.9886% 
( 299) 00:13:43.433 11975.215 - 12034.793: 69.7254% ( 289) 00:13:43.433 12034.793 - 12094.371: 72.4905% ( 292) 00:13:43.434 12094.371 - 12153.949: 74.9716% ( 262) 00:13:43.434 12153.949 - 12213.527: 77.0170% ( 216) 00:13:43.434 12213.527 - 12273.105: 79.1667% ( 227) 00:13:43.434 12273.105 - 12332.684: 80.4735% ( 138) 00:13:43.434 12332.684 - 12392.262: 81.7140% ( 131) 00:13:43.434 12392.262 - 12451.840: 82.9356% ( 129) 00:13:43.434 12451.840 - 12511.418: 83.8920% ( 101) 00:13:43.434 12511.418 - 12570.996: 84.8011% ( 96) 00:13:43.434 12570.996 - 12630.575: 85.7765% ( 103) 00:13:43.434 12630.575 - 12690.153: 86.6572% ( 93) 00:13:43.434 12690.153 - 12749.731: 87.5473% ( 94) 00:13:43.434 12749.731 - 12809.309: 88.0871% ( 57) 00:13:43.434 12809.309 - 12868.887: 88.5606% ( 50) 00:13:43.434 12868.887 - 12928.465: 89.3182% ( 80) 00:13:43.434 12928.465 - 12988.044: 89.7727% ( 48) 00:13:43.434 12988.044 - 13047.622: 90.0568% ( 30) 00:13:43.434 13047.622 - 13107.200: 90.2746% ( 23) 00:13:43.434 13107.200 - 13166.778: 90.4830% ( 22) 00:13:43.434 13166.778 - 13226.356: 90.6629% ( 19) 00:13:43.434 13226.356 - 13285.935: 90.8333% ( 18) 00:13:43.434 13285.935 - 13345.513: 91.0322% ( 21) 00:13:43.434 13345.513 - 13405.091: 91.2784% ( 26) 00:13:43.434 13405.091 - 13464.669: 91.4962% ( 23) 00:13:43.434 13464.669 - 13524.247: 91.6383% ( 15) 00:13:43.434 13524.247 - 13583.825: 92.0360% ( 42) 00:13:43.434 13583.825 - 13643.404: 92.2633% ( 24) 00:13:43.434 13643.404 - 13702.982: 92.3864% ( 13) 00:13:43.434 13702.982 - 13762.560: 92.5095% ( 13) 00:13:43.434 13762.560 - 13822.138: 92.6042% ( 10) 00:13:43.434 13822.138 - 13881.716: 92.6799% ( 8) 00:13:43.434 13881.716 - 13941.295: 92.7273% ( 5) 00:13:43.434 13941.295 - 14000.873: 92.7462% ( 2) 00:13:43.434 14000.873 - 14060.451: 92.8598% ( 12) 00:13:43.434 14060.451 - 14120.029: 92.9735% ( 12) 00:13:43.434 14120.029 - 14179.607: 93.1818% ( 22) 00:13:43.434 14179.607 - 14239.185: 93.4848% ( 32) 00:13:43.434 14239.185 - 14298.764: 93.9489% ( 49) 00:13:43.434 14298.764 - 14358.342: 94.2424% ( 31) 00:13:43.434 14358.342 - 14417.920: 94.4318% ( 20) 00:13:43.434 14417.920 - 14477.498: 94.6023% ( 18) 00:13:43.434 14477.498 - 14537.076: 94.8958% ( 31) 00:13:43.434 14537.076 - 14596.655: 95.0663% ( 18) 00:13:43.434 14596.655 - 14656.233: 95.2652% ( 21) 00:13:43.434 14656.233 - 14715.811: 95.5019% ( 25) 00:13:43.434 14715.811 - 14775.389: 95.8049% ( 32) 00:13:43.434 14775.389 - 14834.967: 96.1269% ( 34) 00:13:43.434 14834.967 - 14894.545: 96.4205% ( 31) 00:13:43.434 14894.545 - 14954.124: 96.7045% ( 30) 00:13:43.434 14954.124 - 15013.702: 96.9981% ( 31) 00:13:43.434 15013.702 - 15073.280: 97.3390% ( 36) 00:13:43.434 15073.280 - 15132.858: 97.5000% ( 17) 00:13:43.434 15132.858 - 15192.436: 97.6799% ( 19) 00:13:43.434 15192.436 - 15252.015: 97.8788% ( 21) 00:13:43.434 15252.015 - 15371.171: 98.1723% ( 31) 00:13:43.434 15371.171 - 15490.327: 98.3996% ( 24) 00:13:43.434 15490.327 - 15609.484: 98.5322% ( 14) 00:13:43.434 15609.484 - 15728.640: 98.6174% ( 9) 00:13:43.434 15728.640 - 15847.796: 98.6837% ( 7) 00:13:43.434 15847.796 - 15966.953: 98.7595% ( 8) 00:13:43.434 15966.953 - 16086.109: 98.7879% ( 3) 00:13:43.434 29431.622 - 29550.778: 98.8068% ( 2) 00:13:43.434 29550.778 - 29669.935: 98.8352% ( 3) 00:13:43.434 29669.935 - 29789.091: 98.8636% ( 3) 00:13:43.434 29789.091 - 29908.247: 98.8920% ( 3) 00:13:43.434 29908.247 - 30027.404: 98.9205% ( 3) 00:13:43.434 30027.404 - 30146.560: 98.9489% ( 3) 00:13:43.434 30146.560 - 30265.716: 98.9773% ( 3) 00:13:43.434 30265.716 - 
30384.873: 99.0057% ( 3) 00:13:43.434 30384.873 - 30504.029: 99.0341% ( 3) 00:13:43.434 30504.029 - 30742.342: 99.0909% ( 6) 00:13:43.434 30742.342 - 30980.655: 99.1477% ( 6) 00:13:43.434 30980.655 - 31218.967: 99.2045% ( 6) 00:13:43.434 31218.967 - 31457.280: 99.2614% ( 6) 00:13:43.434 31457.280 - 31695.593: 99.3087% ( 5) 00:13:43.434 31695.593 - 31933.905: 99.3655% ( 6) 00:13:43.434 31933.905 - 32172.218: 99.3939% ( 3) 00:13:43.434 37415.098 - 37653.411: 99.4318% ( 4) 00:13:43.434 37653.411 - 37891.724: 99.4886% ( 6) 00:13:43.434 37891.724 - 38130.036: 99.5455% ( 6) 00:13:43.434 38130.036 - 38368.349: 99.6023% ( 6) 00:13:43.434 38368.349 - 38606.662: 99.6591% ( 6) 00:13:43.434 38606.662 - 38844.975: 99.7254% ( 7) 00:13:43.434 38844.975 - 39083.287: 99.7727% ( 5) 00:13:43.434 39083.287 - 39321.600: 99.8295% ( 6) 00:13:43.434 39321.600 - 39559.913: 99.8958% ( 7) 00:13:43.434 39559.913 - 39798.225: 99.9527% ( 6) 00:13:43.434 39798.225 - 40036.538: 100.0000% ( 5) 00:13:43.434 00:13:43.434 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:43.434 ============================================================================== 00:13:43.434 Range in us Cumulative IO count 00:13:43.434 9830.400 - 9889.978: 0.0095% ( 1) 00:13:43.434 9889.978 - 9949.556: 0.0189% ( 1) 00:13:43.434 9949.556 - 10009.135: 0.0379% ( 2) 00:13:43.434 10009.135 - 10068.713: 0.1610% ( 13) 00:13:43.434 10068.713 - 10128.291: 0.3125% ( 16) 00:13:43.434 10128.291 - 10187.869: 0.5966% ( 30) 00:13:43.434 10187.869 - 10247.447: 0.9659% ( 39) 00:13:43.434 10247.447 - 10307.025: 1.2784% ( 33) 00:13:43.434 10307.025 - 10366.604: 1.6004% ( 34) 00:13:43.434 10366.604 - 10426.182: 2.0833% ( 51) 00:13:43.434 10426.182 - 10485.760: 2.5663% ( 51) 00:13:43.434 10485.760 - 10545.338: 3.0682% ( 53) 00:13:43.434 10545.338 - 10604.916: 3.4470% ( 40) 00:13:43.434 10604.916 - 10664.495: 4.1004% ( 69) 00:13:43.434 10664.495 - 10724.073: 4.7064% ( 64) 00:13:43.434 10724.073 - 10783.651: 5.5777% ( 92) 00:13:43.434 10783.651 - 10843.229: 6.8845% ( 138) 00:13:43.434 10843.229 - 10902.807: 8.1345% ( 132) 00:13:43.434 10902.807 - 10962.385: 10.1420% ( 212) 00:13:43.434 10962.385 - 11021.964: 12.2254% ( 220) 00:13:43.434 11021.964 - 11081.542: 14.2614% ( 215) 00:13:43.434 11081.542 - 11141.120: 17.4716% ( 339) 00:13:43.434 11141.120 - 11200.698: 20.7008% ( 341) 00:13:43.434 11200.698 - 11260.276: 24.4129% ( 392) 00:13:43.434 11260.276 - 11319.855: 28.1345% ( 393) 00:13:43.434 11319.855 - 11379.433: 31.7803% ( 385) 00:13:43.434 11379.433 - 11439.011: 35.4830% ( 391) 00:13:43.434 11439.011 - 11498.589: 40.2273% ( 501) 00:13:43.434 11498.589 - 11558.167: 43.8636% ( 384) 00:13:43.434 11558.167 - 11617.745: 47.3864% ( 372) 00:13:43.434 11617.745 - 11677.324: 50.4830% ( 327) 00:13:43.434 11677.324 - 11736.902: 53.8163% ( 352) 00:13:43.434 11736.902 - 11796.480: 57.0549% ( 342) 00:13:43.434 11796.480 - 11856.058: 60.5019% ( 364) 00:13:43.434 11856.058 - 11915.636: 64.4223% ( 414) 00:13:43.434 11915.636 - 11975.215: 68.1155% ( 390) 00:13:43.434 11975.215 - 12034.793: 70.8144% ( 285) 00:13:43.434 12034.793 - 12094.371: 72.9735% ( 228) 00:13:43.434 12094.371 - 12153.949: 75.2462% ( 240) 00:13:43.434 12153.949 - 12213.527: 77.1780% ( 204) 00:13:43.434 12213.527 - 12273.105: 79.1193% ( 205) 00:13:43.434 12273.105 - 12332.684: 80.7008% ( 167) 00:13:43.434 12332.684 - 12392.262: 82.0549% ( 143) 00:13:43.434 12392.262 - 12451.840: 83.1913% ( 120) 00:13:43.434 12451.840 - 12511.418: 84.0909% ( 95) 00:13:43.434 12511.418 - 12570.996: 85.2367% ( 121) 
00:13:43.434 12570.996 - 12630.575: 86.0511% ( 86) 00:13:43.434 12630.575 - 12690.153: 86.7708% ( 76) 00:13:43.434 12690.153 - 12749.731: 87.7367% ( 102) 00:13:43.434 12749.731 - 12809.309: 88.4943% ( 80) 00:13:43.434 12809.309 - 12868.887: 88.9962% ( 53) 00:13:43.434 12868.887 - 12928.465: 89.5360% ( 57) 00:13:43.434 12928.465 - 12988.044: 89.8769% ( 36) 00:13:43.434 12988.044 - 13047.622: 90.1799% ( 32) 00:13:43.434 13047.622 - 13107.200: 90.4261% ( 26) 00:13:43.434 13107.200 - 13166.778: 90.6250% ( 21) 00:13:43.434 13166.778 - 13226.356: 90.7860% ( 17) 00:13:43.434 13226.356 - 13285.935: 90.8712% ( 9) 00:13:43.434 13285.935 - 13345.513: 90.9470% ( 8) 00:13:43.434 13345.513 - 13405.091: 91.1553% ( 22) 00:13:43.434 13405.091 - 13464.669: 91.2500% ( 10) 00:13:43.434 13464.669 - 13524.247: 91.3258% ( 8) 00:13:43.434 13524.247 - 13583.825: 91.4489% ( 13) 00:13:43.434 13583.825 - 13643.404: 91.6288% ( 19) 00:13:43.434 13643.404 - 13702.982: 91.8561% ( 24) 00:13:43.434 13702.982 - 13762.560: 92.1307% ( 29) 00:13:43.434 13762.560 - 13822.138: 92.3485% ( 23) 00:13:43.434 13822.138 - 13881.716: 92.6326% ( 30) 00:13:43.434 13881.716 - 13941.295: 93.1061% ( 50) 00:13:43.434 13941.295 - 14000.873: 93.2670% ( 17) 00:13:43.434 14000.873 - 14060.451: 93.4659% ( 21) 00:13:43.434 14060.451 - 14120.029: 93.6269% ( 17) 00:13:43.434 14120.029 - 14179.607: 93.7595% ( 14) 00:13:43.434 14179.607 - 14239.185: 93.8920% ( 14) 00:13:43.434 14239.185 - 14298.764: 94.0246% ( 14) 00:13:43.434 14298.764 - 14358.342: 94.1667% ( 15) 00:13:43.434 14358.342 - 14417.920: 94.3561% ( 20) 00:13:43.434 14417.920 - 14477.498: 94.4981% ( 15) 00:13:43.434 14477.498 - 14537.076: 94.5833% ( 9) 00:13:43.434 14537.076 - 14596.655: 94.6875% ( 11) 00:13:43.434 14596.655 - 14656.233: 94.8674% ( 19) 00:13:43.435 14656.233 - 14715.811: 95.1420% ( 29) 00:13:43.435 14715.811 - 14775.389: 95.4356% ( 31) 00:13:43.435 14775.389 - 14834.967: 95.7386% ( 32) 00:13:43.435 14834.967 - 14894.545: 96.0890% ( 37) 00:13:43.435 14894.545 - 14954.124: 96.4015% ( 33) 00:13:43.435 14954.124 - 15013.702: 96.7803% ( 40) 00:13:43.435 15013.702 - 15073.280: 97.0265% ( 26) 00:13:43.435 15073.280 - 15132.858: 97.2727% ( 26) 00:13:43.435 15132.858 - 15192.436: 97.5095% ( 25) 00:13:43.435 15192.436 - 15252.015: 97.6894% ( 19) 00:13:43.435 15252.015 - 15371.171: 98.0682% ( 40) 00:13:43.435 15371.171 - 15490.327: 98.4186% ( 37) 00:13:43.435 15490.327 - 15609.484: 98.6269% ( 22) 00:13:43.435 15609.484 - 15728.640: 98.7689% ( 15) 00:13:43.435 15728.640 - 15847.796: 98.7879% ( 2) 00:13:43.435 27405.964 - 27525.120: 98.8068% ( 2) 00:13:43.435 27525.120 - 27644.276: 98.8826% ( 8) 00:13:43.435 27644.276 - 27763.433: 98.9489% ( 7) 00:13:43.435 27763.433 - 27882.589: 99.0057% ( 6) 00:13:43.435 27882.589 - 28001.745: 99.0341% ( 3) 00:13:43.435 28001.745 - 28120.902: 99.0625% ( 3) 00:13:43.435 28120.902 - 28240.058: 99.0814% ( 2) 00:13:43.435 28240.058 - 28359.215: 99.1098% ( 3) 00:13:43.435 28359.215 - 28478.371: 99.1383% ( 3) 00:13:43.435 28478.371 - 28597.527: 99.1572% ( 2) 00:13:43.435 28597.527 - 28716.684: 99.1761% ( 2) 00:13:43.435 28716.684 - 28835.840: 99.2045% ( 3) 00:13:43.435 28835.840 - 28954.996: 99.2330% ( 3) 00:13:43.435 28954.996 - 29074.153: 99.2614% ( 3) 00:13:43.435 29074.153 - 29193.309: 99.2803% ( 2) 00:13:43.435 29193.309 - 29312.465: 99.3087% ( 3) 00:13:43.435 29312.465 - 29431.622: 99.3371% ( 3) 00:13:43.435 29431.622 - 29550.778: 99.3561% ( 2) 00:13:43.435 29550.778 - 29669.935: 99.3845% ( 3) 00:13:43.435 29669.935 - 29789.091: 99.3939% ( 1) 
00:13:43.435 33363.782 - 33602.095: 99.4413% ( 5) 00:13:43.435 33602.095 - 33840.407: 99.5360% ( 10) 00:13:43.435 35270.284 - 35508.596: 99.5739% ( 4) 00:13:43.435 35508.596 - 35746.909: 99.6307% ( 6) 00:13:43.435 35746.909 - 35985.222: 99.6875% ( 6) 00:13:43.435 35985.222 - 36223.535: 99.7443% ( 6) 00:13:43.435 36223.535 - 36461.847: 99.7917% ( 5) 00:13:43.435 36461.847 - 36700.160: 99.8485% ( 6) 00:13:43.435 36700.160 - 36938.473: 99.8958% ( 5) 00:13:43.435 36938.473 - 37176.785: 99.9432% ( 5) 00:13:43.435 37176.785 - 37415.098: 99.9905% ( 5) 00:13:43.435 37415.098 - 37653.411: 100.0000% ( 1) 00:13:43.435 00:13:43.435 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:43.435 ============================================================================== 00:13:43.435 Range in us Cumulative IO count 00:13:43.435 9770.822 - 9830.400: 0.0095% ( 1) 00:13:43.435 9889.978 - 9949.556: 0.0189% ( 1) 00:13:43.435 9949.556 - 10009.135: 0.0284% ( 1) 00:13:43.435 10009.135 - 10068.713: 0.1136% ( 9) 00:13:43.435 10068.713 - 10128.291: 0.2367% ( 13) 00:13:43.435 10128.291 - 10187.869: 0.4356% ( 21) 00:13:43.435 10187.869 - 10247.447: 0.8996% ( 49) 00:13:43.435 10247.447 - 10307.025: 1.2121% ( 33) 00:13:43.435 10307.025 - 10366.604: 2.0644% ( 90) 00:13:43.435 10366.604 - 10426.182: 2.4527% ( 41) 00:13:43.435 10426.182 - 10485.760: 2.9167% ( 49) 00:13:43.435 10485.760 - 10545.338: 3.3144% ( 42) 00:13:43.435 10545.338 - 10604.916: 3.7973% ( 51) 00:13:43.435 10604.916 - 10664.495: 4.3371% ( 57) 00:13:43.435 10664.495 - 10724.073: 5.0568% ( 76) 00:13:43.435 10724.073 - 10783.651: 5.9754% ( 97) 00:13:43.435 10783.651 - 10843.229: 7.0455% ( 113) 00:13:43.435 10843.229 - 10902.807: 8.5511% ( 159) 00:13:43.435 10902.807 - 10962.385: 10.0852% ( 162) 00:13:43.435 10962.385 - 11021.964: 12.4148% ( 246) 00:13:43.435 11021.964 - 11081.542: 14.6307% ( 234) 00:13:43.435 11081.542 - 11141.120: 17.1686% ( 268) 00:13:43.435 11141.120 - 11200.698: 20.0852% ( 308) 00:13:43.435 11200.698 - 11260.276: 24.0720% ( 421) 00:13:43.435 11260.276 - 11319.855: 28.5606% ( 474) 00:13:43.435 11319.855 - 11379.433: 32.8977% ( 458) 00:13:43.435 11379.433 - 11439.011: 37.0644% ( 440) 00:13:43.435 11439.011 - 11498.589: 40.7670% ( 391) 00:13:43.435 11498.589 - 11558.167: 44.2898% ( 372) 00:13:43.435 11558.167 - 11617.745: 48.3996% ( 434) 00:13:43.435 11617.745 - 11677.324: 52.1970% ( 401) 00:13:43.435 11677.324 - 11736.902: 55.6061% ( 360) 00:13:43.435 11736.902 - 11796.480: 58.6553% ( 322) 00:13:43.435 11796.480 - 11856.058: 61.0890% ( 257) 00:13:43.435 11856.058 - 11915.636: 64.0152% ( 309) 00:13:43.435 11915.636 - 11975.215: 66.8939% ( 304) 00:13:43.435 11975.215 - 12034.793: 70.0095% ( 329) 00:13:43.435 12034.793 - 12094.371: 72.4148% ( 254) 00:13:43.435 12094.371 - 12153.949: 75.0379% ( 277) 00:13:43.435 12153.949 - 12213.527: 77.1117% ( 219) 00:13:43.435 12213.527 - 12273.105: 79.0530% ( 205) 00:13:43.435 12273.105 - 12332.684: 80.2746% ( 129) 00:13:43.435 12332.684 - 12392.262: 81.3163% ( 110) 00:13:43.435 12392.262 - 12451.840: 82.6326% ( 139) 00:13:43.435 12451.840 - 12511.418: 83.5890% ( 101) 00:13:43.435 12511.418 - 12570.996: 84.6591% ( 113) 00:13:43.435 12570.996 - 12630.575: 85.5682% ( 96) 00:13:43.435 12630.575 - 12690.153: 86.2311% ( 70) 00:13:43.435 12690.153 - 12749.731: 86.8561% ( 66) 00:13:43.435 12749.731 - 12809.309: 87.5379% ( 72) 00:13:43.435 12809.309 - 12868.887: 88.0966% ( 59) 00:13:43.435 12868.887 - 12928.465: 88.8163% ( 76) 00:13:43.435 12928.465 - 12988.044: 89.3182% ( 53) 00:13:43.435 
12988.044 - 13047.622: 89.6307% ( 33) 00:13:43.435 13047.622 - 13107.200: 90.0189% ( 41) 00:13:43.435 13107.200 - 13166.778: 90.2652% ( 26) 00:13:43.435 13166.778 - 13226.356: 90.5208% ( 27) 00:13:43.435 13226.356 - 13285.935: 90.7102% ( 20) 00:13:43.435 13285.935 - 13345.513: 90.8333% ( 13) 00:13:43.435 13345.513 - 13405.091: 90.9470% ( 12) 00:13:43.435 13405.091 - 13464.669: 91.0701% ( 13) 00:13:43.435 13464.669 - 13524.247: 91.1932% ( 13) 00:13:43.435 13524.247 - 13583.825: 91.3163% ( 13) 00:13:43.435 13583.825 - 13643.404: 91.4489% ( 14) 00:13:43.435 13643.404 - 13702.982: 91.6667% ( 23) 00:13:43.435 13702.982 - 13762.560: 91.9508% ( 30) 00:13:43.435 13762.560 - 13822.138: 92.0928% ( 15) 00:13:43.435 13822.138 - 13881.716: 92.2348% ( 15) 00:13:43.435 13881.716 - 13941.295: 92.3674% ( 14) 00:13:43.435 13941.295 - 14000.873: 92.5189% ( 16) 00:13:43.435 14000.873 - 14060.451: 92.7367% ( 23) 00:13:43.435 14060.451 - 14120.029: 93.0398% ( 32) 00:13:43.435 14120.029 - 14179.607: 93.3428% ( 32) 00:13:43.435 14179.607 - 14239.185: 93.7027% ( 38) 00:13:43.435 14239.185 - 14298.764: 93.9583% ( 27) 00:13:43.435 14298.764 - 14358.342: 94.2519% ( 31) 00:13:43.435 14358.342 - 14417.920: 94.4602% ( 22) 00:13:43.435 14417.920 - 14477.498: 94.6686% ( 22) 00:13:43.435 14477.498 - 14537.076: 94.7538% ( 9) 00:13:43.435 14537.076 - 14596.655: 94.9527% ( 21) 00:13:43.435 14596.655 - 14656.233: 95.1705% ( 23) 00:13:43.435 14656.233 - 14715.811: 95.3693% ( 21) 00:13:43.435 14715.811 - 14775.389: 95.6439% ( 29) 00:13:43.435 14775.389 - 14834.967: 95.9186% ( 29) 00:13:43.435 14834.967 - 14894.545: 96.1458% ( 24) 00:13:43.435 14894.545 - 14954.124: 96.4110% ( 28) 00:13:43.435 14954.124 - 15013.702: 96.6856% ( 29) 00:13:43.435 15013.702 - 15073.280: 96.9413% ( 27) 00:13:43.435 15073.280 - 15132.858: 97.1780% ( 25) 00:13:43.435 15132.858 - 15192.436: 97.3390% ( 17) 00:13:43.435 15192.436 - 15252.015: 97.5000% ( 17) 00:13:43.435 15252.015 - 15371.171: 97.8504% ( 37) 00:13:43.435 15371.171 - 15490.327: 98.1061% ( 27) 00:13:43.435 15490.327 - 15609.484: 98.2765% ( 18) 00:13:43.435 15609.484 - 15728.640: 98.4659% ( 20) 00:13:43.435 15728.640 - 15847.796: 98.5985% ( 14) 00:13:43.435 15847.796 - 15966.953: 98.7595% ( 17) 00:13:43.435 15966.953 - 16086.109: 98.7879% ( 3) 00:13:43.435 24784.524 - 24903.680: 98.8731% ( 9) 00:13:43.435 24903.680 - 25022.836: 98.9773% ( 11) 00:13:43.435 25022.836 - 25141.993: 99.0152% ( 4) 00:13:43.435 25141.993 - 25261.149: 99.0341% ( 2) 00:13:43.435 25261.149 - 25380.305: 99.0625% ( 3) 00:13:43.435 25380.305 - 25499.462: 99.0814% ( 2) 00:13:43.435 25499.462 - 25618.618: 99.1098% ( 3) 00:13:43.435 25618.618 - 25737.775: 99.1383% ( 3) 00:13:43.435 25737.775 - 25856.931: 99.1667% ( 3) 00:13:43.435 25856.931 - 25976.087: 99.1856% ( 2) 00:13:43.435 25976.087 - 26095.244: 99.2140% ( 3) 00:13:43.435 26095.244 - 26214.400: 99.2424% ( 3) 00:13:43.435 26214.400 - 26333.556: 99.2708% ( 3) 00:13:43.435 26333.556 - 26452.713: 99.2992% ( 3) 00:13:43.435 26452.713 - 26571.869: 99.3182% ( 2) 00:13:43.435 26571.869 - 26691.025: 99.3466% ( 3) 00:13:43.435 26691.025 - 26810.182: 99.3750% ( 3) 00:13:43.435 26810.182 - 26929.338: 99.3939% ( 2) 00:13:43.435 30504.029 - 30742.342: 99.4034% ( 1) 00:13:43.435 30742.342 - 30980.655: 99.5076% ( 11) 00:13:43.435 32410.531 - 32648.844: 99.5170% ( 1) 00:13:43.435 32648.844 - 32887.156: 99.5644% ( 5) 00:13:43.435 32887.156 - 33125.469: 99.6212% ( 6) 00:13:43.435 33125.469 - 33363.782: 99.6780% ( 6) 00:13:43.435 33363.782 - 33602.095: 99.7254% ( 5) 00:13:43.435 33602.095 
- 33840.407: 99.7727% ( 5) 00:13:43.435 33840.407 - 34078.720: 99.8295% ( 6) 00:13:43.435 34078.720 - 34317.033: 99.8864% ( 6) 00:13:43.436 34317.033 - 34555.345: 99.9432% ( 6) 00:13:43.436 34555.345 - 34793.658: 100.0000% ( 6) 00:13:43.436 00:13:43.436 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:43.436 ============================================================================== 00:13:43.436 Range in us Cumulative IO count 00:13:43.436 9830.400 - 9889.978: 0.0095% ( 1) 00:13:43.436 9889.978 - 9949.556: 0.0189% ( 1) 00:13:43.436 10009.135 - 10068.713: 0.0379% ( 2) 00:13:43.436 10068.713 - 10128.291: 0.0947% ( 6) 00:13:43.436 10128.291 - 10187.869: 0.1610% ( 7) 00:13:43.436 10187.869 - 10247.447: 0.2746% ( 12) 00:13:43.436 10247.447 - 10307.025: 0.5682% ( 31) 00:13:43.436 10307.025 - 10366.604: 0.9280% ( 38) 00:13:43.436 10366.604 - 10426.182: 1.2216% ( 31) 00:13:43.436 10426.182 - 10485.760: 1.8371% ( 65) 00:13:43.436 10485.760 - 10545.338: 2.5758% ( 78) 00:13:43.436 10545.338 - 10604.916: 3.3902% ( 86) 00:13:43.436 10604.916 - 10664.495: 4.2803% ( 94) 00:13:43.436 10664.495 - 10724.073: 5.0189% ( 78) 00:13:43.436 10724.073 - 10783.651: 5.9470% ( 98) 00:13:43.436 10783.651 - 10843.229: 6.8371% ( 94) 00:13:43.436 10843.229 - 10902.807: 8.1061% ( 134) 00:13:43.436 10902.807 - 10962.385: 9.5833% ( 156) 00:13:43.436 10962.385 - 11021.964: 11.8845% ( 243) 00:13:43.436 11021.964 - 11081.542: 14.4697% ( 273) 00:13:43.436 11081.542 - 11141.120: 17.3864% ( 308) 00:13:43.436 11141.120 - 11200.698: 20.7670% ( 357) 00:13:43.436 11200.698 - 11260.276: 24.2519% ( 368) 00:13:43.436 11260.276 - 11319.855: 28.4280% ( 441) 00:13:43.436 11319.855 - 11379.433: 31.6004% ( 335) 00:13:43.436 11379.433 - 11439.011: 35.5777% ( 420) 00:13:43.436 11439.011 - 11498.589: 40.7102% ( 542) 00:13:43.436 11498.589 - 11558.167: 44.8958% ( 442) 00:13:43.436 11558.167 - 11617.745: 48.5890% ( 390) 00:13:43.436 11617.745 - 11677.324: 53.0587% ( 472) 00:13:43.436 11677.324 - 11736.902: 56.4678% ( 360) 00:13:43.436 11736.902 - 11796.480: 60.3409% ( 409) 00:13:43.436 11796.480 - 11856.058: 63.0871% ( 290) 00:13:43.436 11856.058 - 11915.636: 65.7008% ( 276) 00:13:43.436 11915.636 - 11975.215: 68.2197% ( 266) 00:13:43.436 11975.215 - 12034.793: 70.7386% ( 266) 00:13:43.436 12034.793 - 12094.371: 73.1439% ( 254) 00:13:43.436 12094.371 - 12153.949: 75.3409% ( 232) 00:13:43.436 12153.949 - 12213.527: 77.0265% ( 178) 00:13:43.436 12213.527 - 12273.105: 78.5890% ( 165) 00:13:43.436 12273.105 - 12332.684: 79.7064% ( 118) 00:13:43.436 12332.684 - 12392.262: 81.1080% ( 148) 00:13:43.436 12392.262 - 12451.840: 81.9886% ( 93) 00:13:43.436 12451.840 - 12511.418: 82.6894% ( 74) 00:13:43.436 12511.418 - 12570.996: 83.4280% ( 78) 00:13:43.436 12570.996 - 12630.575: 84.2992% ( 92) 00:13:43.436 12630.575 - 12690.153: 85.2557% ( 101) 00:13:43.436 12690.153 - 12749.731: 86.4867% ( 130) 00:13:43.436 12749.731 - 12809.309: 87.9924% ( 159) 00:13:43.436 12809.309 - 12868.887: 88.7405% ( 79) 00:13:43.436 12868.887 - 12928.465: 89.4034% ( 70) 00:13:43.436 12928.465 - 12988.044: 89.9242% ( 55) 00:13:43.436 12988.044 - 13047.622: 90.4640% ( 57) 00:13:43.436 13047.622 - 13107.200: 90.7481% ( 30) 00:13:43.436 13107.200 - 13166.778: 91.0606% ( 33) 00:13:43.436 13166.778 - 13226.356: 91.2311% ( 18) 00:13:43.436 13226.356 - 13285.935: 91.3352% ( 11) 00:13:43.436 13285.935 - 13345.513: 91.4299% ( 10) 00:13:43.436 13345.513 - 13405.091: 91.5152% ( 9) 00:13:43.436 13405.091 - 13464.669: 91.5909% ( 8) 00:13:43.436 13464.669 - 13524.247: 
91.6856% ( 10) 00:13:43.436 13524.247 - 13583.825: 91.8182% ( 14) 00:13:43.436 13583.825 - 13643.404: 91.9129% ( 10) 00:13:43.436 13643.404 - 13702.982: 91.9886% ( 8) 00:13:43.436 13702.982 - 13762.560: 92.0739% ( 9) 00:13:43.436 13762.560 - 13822.138: 92.1780% ( 11) 00:13:43.436 13822.138 - 13881.716: 92.3106% ( 14) 00:13:43.436 13881.716 - 13941.295: 92.5379% ( 24) 00:13:43.436 13941.295 - 14000.873: 92.6799% ( 15) 00:13:43.436 14000.873 - 14060.451: 92.9356% ( 27) 00:13:43.436 14060.451 - 14120.029: 93.0682% ( 14) 00:13:43.436 14120.029 - 14179.607: 93.1439% ( 8) 00:13:43.436 14179.607 - 14239.185: 93.2197% ( 8) 00:13:43.436 14239.185 - 14298.764: 93.3333% ( 12) 00:13:43.436 14298.764 - 14358.342: 93.4754% ( 15) 00:13:43.436 14358.342 - 14417.920: 93.6364% ( 17) 00:13:43.436 14417.920 - 14477.498: 93.8447% ( 22) 00:13:43.436 14477.498 - 14537.076: 94.1856% ( 36) 00:13:43.436 14537.076 - 14596.655: 94.4602% ( 29) 00:13:43.436 14596.655 - 14656.233: 94.6402% ( 19) 00:13:43.436 14656.233 - 14715.811: 94.8485% ( 22) 00:13:43.436 14715.811 - 14775.389: 95.1231% ( 29) 00:13:43.436 14775.389 - 14834.967: 95.4640% ( 36) 00:13:43.436 14834.967 - 14894.545: 95.7955% ( 35) 00:13:43.436 14894.545 - 14954.124: 96.1837% ( 41) 00:13:43.436 14954.124 - 15013.702: 96.5057% ( 34) 00:13:43.436 15013.702 - 15073.280: 96.9034% ( 42) 00:13:43.436 15073.280 - 15132.858: 97.1875% ( 30) 00:13:43.436 15132.858 - 15192.436: 97.3958% ( 22) 00:13:43.436 15192.436 - 15252.015: 97.5568% ( 17) 00:13:43.436 15252.015 - 15371.171: 97.8598% ( 32) 00:13:43.436 15371.171 - 15490.327: 97.9830% ( 13) 00:13:43.436 15490.327 - 15609.484: 98.1061% ( 13) 00:13:43.436 15609.484 - 15728.640: 98.1818% ( 8) 00:13:43.436 15728.640 - 15847.796: 98.2670% ( 9) 00:13:43.436 15847.796 - 15966.953: 98.3239% ( 6) 00:13:43.436 15966.953 - 16086.109: 98.3807% ( 6) 00:13:43.436 16086.109 - 16205.265: 98.4186% ( 4) 00:13:43.436 16205.265 - 16324.422: 98.4848% ( 7) 00:13:43.436 16324.422 - 16443.578: 98.6742% ( 20) 00:13:43.436 16443.578 - 16562.735: 98.7500% ( 8) 00:13:43.436 16562.735 - 16681.891: 98.7784% ( 3) 00:13:43.436 16681.891 - 16801.047: 98.7879% ( 1) 00:13:43.436 21686.458 - 21805.615: 98.8068% ( 2) 00:13:43.436 21805.615 - 21924.771: 98.8542% ( 5) 00:13:43.436 21924.771 - 22043.927: 98.9205% ( 7) 00:13:43.436 22043.927 - 22163.084: 99.0057% ( 9) 00:13:43.436 22163.084 - 22282.240: 99.0246% ( 2) 00:13:43.436 22282.240 - 22401.396: 99.0530% ( 3) 00:13:43.436 22401.396 - 22520.553: 99.0720% ( 2) 00:13:43.436 22520.553 - 22639.709: 99.1004% ( 3) 00:13:43.436 22639.709 - 22758.865: 99.1288% ( 3) 00:13:43.436 22758.865 - 22878.022: 99.1572% ( 3) 00:13:43.436 22878.022 - 22997.178: 99.1856% ( 3) 00:13:43.436 22997.178 - 23116.335: 99.2045% ( 2) 00:13:43.436 23116.335 - 23235.491: 99.2330% ( 3) 00:13:43.436 23235.491 - 23354.647: 99.2519% ( 2) 00:13:43.436 23354.647 - 23473.804: 99.2803% ( 3) 00:13:43.436 23473.804 - 23592.960: 99.3087% ( 3) 00:13:43.436 23592.960 - 23712.116: 99.3277% ( 2) 00:13:43.436 23712.116 - 23831.273: 99.3561% ( 3) 00:13:43.436 23831.273 - 23950.429: 99.3845% ( 3) 00:13:43.436 23950.429 - 24069.585: 99.3939% ( 1) 00:13:43.436 28001.745 - 28120.902: 99.4697% ( 8) 00:13:43.436 29789.091 - 29908.247: 99.4792% ( 1) 00:13:43.436 29908.247 - 30027.404: 99.5170% ( 4) 00:13:43.436 30027.404 - 30146.560: 99.5455% ( 3) 00:13:43.436 30146.560 - 30265.716: 99.5833% ( 4) 00:13:43.436 30265.716 - 30384.873: 99.6117% ( 3) 00:13:43.436 30384.873 - 30504.029: 99.6402% ( 3) 00:13:43.436 30504.029 - 30742.342: 99.6970% ( 6) 
00:13:43.436 30742.342 - 30980.655: 99.7443% ( 5) 00:13:43.436 30980.655 - 31218.967: 99.8011% ( 6) 00:13:43.436 31218.967 - 31457.280: 99.8580% ( 6) 00:13:43.436 31457.280 - 31695.593: 99.9242% ( 7) 00:13:43.436 31695.593 - 31933.905: 99.9811% ( 6) 00:13:43.436 31933.905 - 32172.218: 100.0000% ( 2) 00:13:43.436 00:13:43.436 10:19:37 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:13:43.436 00:13:43.436 real 0m2.815s 00:13:43.436 user 0m2.318s 00:13:43.436 sys 0m0.376s 00:13:43.436 10:19:37 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.436 ************************************ 00:13:43.436 END TEST nvme_perf 00:13:43.436 10:19:37 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 ************************************ 00:13:43.437 10:19:37 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:43.437 10:19:37 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:43.437 10:19:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.437 10:19:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.695 ************************************ 00:13:43.695 START TEST nvme_hello_world 00:13:43.695 ************************************ 00:13:43.695 10:19:37 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:43.954 Initializing NVMe Controllers 00:13:43.954 Attached to 0000:00:10.0 00:13:43.954 Namespace ID: 1 size: 6GB 00:13:43.954 Attached to 0000:00:11.0 00:13:43.954 Namespace ID: 1 size: 5GB 00:13:43.954 Attached to 0000:00:13.0 00:13:43.954 Namespace ID: 1 size: 1GB 00:13:43.954 Attached to 0000:00:12.0 00:13:43.954 Namespace ID: 1 size: 4GB 00:13:43.954 Namespace ID: 2 size: 4GB 00:13:43.954 Namespace ID: 3 size: 4GB 00:13:43.954 Initialization complete. 00:13:43.954 INFO: using host memory buffer for IO 00:13:43.954 Hello world! 00:13:43.954 INFO: using host memory buffer for IO 00:13:43.954 Hello world! 00:13:43.954 INFO: using host memory buffer for IO 00:13:43.954 Hello world! 00:13:43.954 INFO: using host memory buffer for IO 00:13:43.954 Hello world! 00:13:43.954 INFO: using host memory buffer for IO 00:13:43.954 Hello world! 00:13:43.954 INFO: using host memory buffer for IO 00:13:43.954 Hello world! 
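The six "Hello world!" lines above are one round trip per attached namespace (NSID 1 on each of 10.0, 11.0 and 13.0, plus NSIDs 1-3 on 12.0), each going through a host memory buffer per the INFO lines. A minimal sketch of reproducing the run by hand, with the binary path and -i flag taken verbatim from the run_test line above (running as root and prior device binding are assumed, as in this CI environment):

    # hello_world attaches every PCIe NVMe controller it finds and issues a
    # write/read round trip per namespace; -i 0 selects shared-memory instance 0
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0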
00:13:43.954 ************************************ 00:13:43.954 END TEST nvme_hello_world 00:13:43.954 ************************************ 00:13:43.954 00:13:43.954 real 0m0.386s 00:13:43.954 user 0m0.158s 00:13:43.954 sys 0m0.182s 00:13:43.954 10:19:38 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.955 10:19:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:43.955 10:19:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:43.955 10:19:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.955 10:19:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.955 10:19:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.955 ************************************ 00:13:43.955 START TEST nvme_sgl 00:13:43.955 ************************************ 00:13:43.955 10:19:38 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:44.532 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:13:44.532 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:13:44.532 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:13:44.532 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:13:44.532 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:13:44.532 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:13:44.532 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:13:44.532 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:13:44.532 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:13:44.532 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:13:44.532 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:13:44.532 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:13:44.532 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:13:44.532 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:13:44.532 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:13:44.532 NVMe Readv/Writev Request test 00:13:44.532 Attached to 0000:00:10.0 00:13:44.532 Attached to 0000:00:11.0 00:13:44.532 Attached to 0000:00:13.0 00:13:44.532 Attached to 0000:00:12.0 00:13:44.532 0000:00:10.0: build_io_request_2 test passed 00:13:44.532 0000:00:10.0: build_io_request_4 test passed 00:13:44.532 0000:00:10.0: build_io_request_5 test passed 00:13:44.532 0000:00:10.0: build_io_request_6 test passed 00:13:44.532 0000:00:10.0: build_io_request_7 test passed 00:13:44.532 0000:00:10.0: build_io_request_10 test passed 00:13:44.532 0000:00:11.0: build_io_request_2 test passed 00:13:44.532 0000:00:11.0: build_io_request_4 test passed 00:13:44.532 0000:00:11.0: build_io_request_5 test passed 00:13:44.532 0000:00:11.0: build_io_request_6 test passed 00:13:44.532 0000:00:11.0: build_io_request_7 test passed 00:13:44.532 0000:00:11.0: build_io_request_10 test passed 00:13:44.532 Cleaning up... 00:13:44.532 00:13:44.532 real 0m0.520s 00:13:44.532 user 0m0.235s 00:13:44.532 sys 0m0.223s 00:13:44.532 10:19:38 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.532 10:19:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:13:44.532 ************************************ 00:13:44.532 END TEST nvme_sgl 00:13:44.532 ************************************ 00:13:44.532 10:19:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:44.532 10:19:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.532 10:19:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.532 10:19:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.532 ************************************ 00:13:44.532 START TEST nvme_e2edp 00:13:44.532 ************************************ 00:13:44.532 10:19:38 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:45.099 NVMe Write/Read with End-to-End data protection test 00:13:45.099 Attached to 0000:00:10.0 00:13:45.099 Attached to 0000:00:11.0 00:13:45.099 Attached to 0000:00:13.0 00:13:45.099 Attached to 0000:00:12.0 00:13:45.099 Cleaning up... 
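Both probes above are standalone binaries under test/nvme/. The sgl tool intentionally builds malformed scatter-gather requests: on 0000:00:10.0 and 0000:00:11.0 the "Invalid IO length parameter" rejections (requests 0, 1, 3, 8, 9, 11) and the passes (2, 4, 5, 6, 7, 10) are both expected outcomes, while 0000:00:13.0 and 0000:00:12.0 reject all twelve. nvme_dp then drives the write/read path with end-to-end data protection enabled. A sketch of running them directly, paths verbatim from this log:

    # both tools enumerate all PCIe-attached controllers on their own, as seen above
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp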
00:13:45.099 00:13:45.099 real 0m0.373s 00:13:45.099 user 0m0.139s 00:13:45.099 sys 0m0.181s 00:13:45.099 10:19:39 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.099 10:19:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:13:45.099 ************************************ 00:13:45.099 END TEST nvme_e2edp 00:13:45.099 ************************************ 00:13:45.099 10:19:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:45.099 10:19:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.099 10:19:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.099 10:19:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.099 ************************************ 00:13:45.099 START TEST nvme_reserve 00:13:45.099 ************************************ 00:13:45.099 10:19:39 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:45.357 ===================================================== 00:13:45.357 NVMe Controller at PCI bus 0, device 16, function 0 00:13:45.357 ===================================================== 00:13:45.357 Reservations: Not Supported 00:13:45.357 ===================================================== 00:13:45.357 NVMe Controller at PCI bus 0, device 17, function 0 00:13:45.358 ===================================================== 00:13:45.358 Reservations: Not Supported 00:13:45.358 ===================================================== 00:13:45.358 NVMe Controller at PCI bus 0, device 19, function 0 00:13:45.358 ===================================================== 00:13:45.358 Reservations: Not Supported 00:13:45.358 ===================================================== 00:13:45.358 NVMe Controller at PCI bus 0, device 18, function 0 00:13:45.358 ===================================================== 00:13:45.358 Reservations: Not Supported 00:13:45.358 Reservation test passed 00:13:45.358 ************************************ 00:13:45.358 END TEST nvme_reserve 00:13:45.358 ************************************ 00:13:45.358 00:13:45.358 real 0m0.387s 00:13:45.358 user 0m0.142s 00:13:45.358 sys 0m0.189s 00:13:45.358 10:19:39 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.358 10:19:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:13:45.358 10:19:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:45.358 10:19:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.358 10:19:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.358 10:19:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.358 ************************************ 00:13:45.358 START TEST nvme_err_injection 00:13:45.358 ************************************ 00:13:45.358 10:19:39 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:45.925 NVMe Error Injection test 00:13:45.925 Attached to 0000:00:10.0 00:13:45.925 Attached to 0000:00:11.0 00:13:45.925 Attached to 0000:00:13.0 00:13:45.925 Attached to 0000:00:12.0 00:13:45.925 0000:00:10.0: get features failed as expected 00:13:45.925 0000:00:11.0: get features failed as expected 00:13:45.925 0000:00:13.0: get features failed as expected 00:13:45.925 0000:00:12.0: get features failed as expected 00:13:45.925 
0000:00:10.0: get features successfully as expected 00:13:45.925 0000:00:11.0: get features successfully as expected 00:13:45.925 0000:00:13.0: get features successfully as expected 00:13:45.925 0000:00:12.0: get features successfully as expected 00:13:45.925 0000:00:10.0: read failed as expected 00:13:45.925 0000:00:11.0: read failed as expected 00:13:45.925 0000:00:13.0: read failed as expected 00:13:45.925 0000:00:12.0: read failed as expected 00:13:45.925 0000:00:12.0: read successfully as expected 00:13:45.925 0000:00:10.0: read successfully as expected 00:13:45.925 0000:00:11.0: read successfully as expected 00:13:45.925 0000:00:13.0: read successfully as expected 00:13:45.925 Cleaning up... 00:13:45.925 00:13:45.925 real 0m0.396s 00:13:45.925 user 0m0.156s 00:13:45.925 sys 0m0.196s 00:13:45.925 ************************************ 00:13:45.925 END TEST nvme_err_injection 00:13:45.925 ************************************ 00:13:45.925 10:19:40 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.925 10:19:40 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:13:45.925 10:19:40 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:45.925 10:19:40 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:13:45.925 10:19:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.925 10:19:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.925 ************************************ 00:13:45.925 START TEST nvme_overhead 00:13:45.925 ************************************ 00:13:45.925 10:19:40 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:47.302 Initializing NVMe Controllers 00:13:47.302 Attached to 0000:00:10.0 00:13:47.302 Attached to 0000:00:11.0 00:13:47.302 Attached to 0000:00:13.0 00:13:47.302 Attached to 0000:00:12.0 00:13:47.302 Initialization complete. Launching workers. 
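The overhead tool launched above measures per-I/O software overhead in the driver and prints the submit and complete latency histograms that follow. The flags are copied from the run_test line; the readings below are inferences from the output, not confirmed by this log:

    # -o 4096 : 4 KiB I/O size (matches the run_test invocation)
    # -t 1    : run time of 1 second (inference)
    # -H      : enable the submit/complete histograms printed below (inference)
    # -i 0    : shared-memory instance 0, consistent with the other tests here
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0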
00:13:47.302 submit (in ns) avg, min, max = 17802.6, 13852.7, 113442.7 00:13:47.303 complete (in ns) avg, min, max = 13039.6, 9163.6, 101017.7 00:13:47.303 00:13:47.303 Submit histogram 00:13:47.303 ================ 00:13:47.303 Range in us Cumulative Count 00:13:47.303 13.847 - 13.905: 0.0375% ( 3) 00:13:47.303 13.905 - 13.964: 0.1000% ( 5) 00:13:47.303 13.964 - 14.022: 0.2001% ( 8) 00:13:47.303 14.022 - 14.080: 0.3876% ( 15) 00:13:47.303 14.080 - 14.138: 0.5502% ( 13) 00:13:47.303 14.138 - 14.196: 0.8503% ( 24) 00:13:47.303 14.196 - 14.255: 1.2755% ( 34) 00:13:47.303 14.255 - 14.313: 1.8257% ( 44) 00:13:47.303 14.313 - 14.371: 2.4634% ( 51) 00:13:47.303 14.371 - 14.429: 3.1762% ( 57) 00:13:47.303 14.429 - 14.487: 4.0515% ( 70) 00:13:47.303 14.487 - 14.545: 4.9018% ( 68) 00:13:47.303 14.545 - 14.604: 6.2148% ( 105) 00:13:47.303 14.604 - 14.662: 7.4903% ( 102) 00:13:47.303 14.662 - 14.720: 8.7533% ( 101) 00:13:47.303 14.720 - 14.778: 9.9787% ( 98) 00:13:47.303 14.778 - 14.836: 11.4043% ( 114) 00:13:47.303 14.836 - 14.895: 12.8548% ( 116) 00:13:47.303 14.895 - 15.011: 16.1436% ( 263) 00:13:47.303 15.011 - 15.127: 19.5448% ( 272) 00:13:47.303 15.127 - 15.244: 23.5963% ( 324) 00:13:47.303 15.244 - 15.360: 29.5611% ( 477) 00:13:47.303 15.360 - 15.476: 37.4265% ( 629) 00:13:47.303 15.476 - 15.593: 44.5792% ( 572) 00:13:47.303 15.593 - 15.709: 50.0438% ( 437) 00:13:47.303 15.709 - 15.825: 54.5705% ( 362) 00:13:47.303 15.825 - 15.942: 57.5841% ( 241) 00:13:47.303 15.942 - 16.058: 59.8975% ( 185) 00:13:47.303 16.058 - 16.175: 61.2605% ( 109) 00:13:47.303 16.175 - 16.291: 62.4984% ( 99) 00:13:47.303 16.291 - 16.407: 63.8865% ( 111) 00:13:47.303 16.407 - 16.524: 64.6868% ( 64) 00:13:47.303 16.524 - 16.640: 65.4495% ( 61) 00:13:47.303 16.640 - 16.756: 65.8747% ( 34) 00:13:47.303 16.756 - 16.873: 66.2749% ( 32) 00:13:47.303 16.873 - 16.989: 66.6125% ( 27) 00:13:47.303 16.989 - 17.105: 66.8251% ( 17) 00:13:47.303 17.105 - 17.222: 67.0126% ( 15) 00:13:47.303 17.222 - 17.338: 67.2127% ( 16) 00:13:47.303 17.338 - 17.455: 67.4128% ( 16) 00:13:47.303 17.455 - 17.571: 67.6754% ( 21) 00:13:47.303 17.571 - 17.687: 67.8755% ( 16) 00:13:47.303 17.687 - 17.804: 68.0130% ( 11) 00:13:47.303 17.804 - 17.920: 68.1005% ( 7) 00:13:47.303 17.920 - 18.036: 68.1881% ( 7) 00:13:47.303 18.036 - 18.153: 68.2256% ( 3) 00:13:47.303 18.153 - 18.269: 68.3006% ( 6) 00:13:47.303 18.269 - 18.385: 68.3256% ( 2) 00:13:47.303 18.385 - 18.502: 68.4132% ( 7) 00:13:47.303 18.502 - 18.618: 68.6507% ( 19) 00:13:47.303 18.618 - 18.735: 69.7261% ( 86) 00:13:47.303 18.735 - 18.851: 72.2896% ( 205) 00:13:47.303 18.851 - 18.967: 75.6159% ( 266) 00:13:47.303 18.967 - 19.084: 78.4794% ( 229) 00:13:47.303 19.084 - 19.200: 80.9429% ( 197) 00:13:47.303 19.200 - 19.316: 82.6310% ( 135) 00:13:47.303 19.316 - 19.433: 84.0440% ( 113) 00:13:47.303 19.433 - 19.549: 84.9193% ( 70) 00:13:47.303 19.549 - 19.665: 85.6196% ( 56) 00:13:47.303 19.665 - 19.782: 86.3449% ( 58) 00:13:47.303 19.782 - 19.898: 86.7575% ( 33) 00:13:47.303 19.898 - 20.015: 87.2702% ( 41) 00:13:47.303 20.015 - 20.131: 87.8204% ( 44) 00:13:47.303 20.131 - 20.247: 88.1706% ( 28) 00:13:47.303 20.247 - 20.364: 88.4707% ( 24) 00:13:47.303 20.364 - 20.480: 88.7708% ( 24) 00:13:47.303 20.480 - 20.596: 88.8583% ( 7) 00:13:47.303 20.596 - 20.713: 88.9959% ( 11) 00:13:47.303 20.713 - 20.829: 89.0834% ( 7) 00:13:47.303 20.829 - 20.945: 89.1334% ( 4) 00:13:47.303 20.945 - 21.062: 89.1959% ( 5) 00:13:47.303 21.062 - 21.178: 89.2460% ( 4) 00:13:47.303 21.178 - 21.295: 89.3460% ( 8) 00:13:47.303 21.295 
- 21.411: 89.3710% ( 2) 00:13:47.303 21.411 - 21.527: 89.4460% ( 6) 00:13:47.303 21.527 - 21.644: 89.5711% ( 10) 00:13:47.303 21.644 - 21.760: 89.6586% ( 7) 00:13:47.303 21.760 - 21.876: 89.7712% ( 9) 00:13:47.303 21.876 - 21.993: 89.8712% ( 8) 00:13:47.303 21.993 - 22.109: 89.9837% ( 9) 00:13:47.303 22.109 - 22.225: 90.0338% ( 4) 00:13:47.303 22.225 - 22.342: 90.0963% ( 5) 00:13:47.303 22.342 - 22.458: 90.1588% ( 5) 00:13:47.303 22.458 - 22.575: 90.1713% ( 1) 00:13:47.303 22.575 - 22.691: 90.1838% ( 1) 00:13:47.303 22.807 - 22.924: 90.2338% ( 4) 00:13:47.303 22.924 - 23.040: 90.2839% ( 4) 00:13:47.303 23.040 - 23.156: 90.3339% ( 4) 00:13:47.303 23.156 - 23.273: 90.3839% ( 4) 00:13:47.303 23.389 - 23.505: 90.4339% ( 4) 00:13:47.303 23.505 - 23.622: 90.5340% ( 8) 00:13:47.303 23.622 - 23.738: 90.5590% ( 2) 00:13:47.303 23.738 - 23.855: 90.6840% ( 10) 00:13:47.303 23.855 - 23.971: 90.7840% ( 8) 00:13:47.303 23.971 - 24.087: 90.8466% ( 5) 00:13:47.303 24.087 - 24.204: 90.9591% ( 9) 00:13:47.303 24.204 - 24.320: 91.0967% ( 11) 00:13:47.303 24.320 - 24.436: 91.1592% ( 5) 00:13:47.303 24.436 - 24.553: 91.2467% ( 7) 00:13:47.303 24.553 - 24.669: 91.3217% ( 6) 00:13:47.303 24.669 - 24.785: 91.4593% ( 11) 00:13:47.303 24.785 - 24.902: 91.5718% ( 9) 00:13:47.303 24.902 - 25.018: 91.6969% ( 10) 00:13:47.303 25.018 - 25.135: 91.7469% ( 4) 00:13:47.303 25.135 - 25.251: 91.8219% ( 6) 00:13:47.303 25.251 - 25.367: 91.8970% ( 6) 00:13:47.303 25.367 - 25.484: 91.9720% ( 6) 00:13:47.303 25.484 - 25.600: 92.1220% ( 12) 00:13:47.303 25.600 - 25.716: 92.1596% ( 3) 00:13:47.303 25.716 - 25.833: 92.2096% ( 4) 00:13:47.303 25.833 - 25.949: 92.2596% ( 4) 00:13:47.303 25.949 - 26.065: 92.3096% ( 4) 00:13:47.303 26.065 - 26.182: 92.3471% ( 3) 00:13:47.303 26.182 - 26.298: 92.3971% ( 4) 00:13:47.303 26.298 - 26.415: 92.4347% ( 3) 00:13:47.303 26.415 - 26.531: 92.4722% ( 3) 00:13:47.303 26.531 - 26.647: 92.5347% ( 5) 00:13:47.303 26.647 - 26.764: 92.5722% ( 3) 00:13:47.303 26.764 - 26.880: 92.6347% ( 5) 00:13:47.303 26.880 - 26.996: 92.6723% ( 3) 00:13:47.303 26.996 - 27.113: 92.6973% ( 2) 00:13:47.303 27.113 - 27.229: 92.7223% ( 2) 00:13:47.303 27.229 - 27.345: 92.8098% ( 7) 00:13:47.303 27.345 - 27.462: 92.8223% ( 1) 00:13:47.303 27.462 - 27.578: 92.8723% ( 4) 00:13:47.303 27.578 - 27.695: 92.9223% ( 4) 00:13:47.303 27.695 - 27.811: 92.9724% ( 4) 00:13:47.303 27.811 - 27.927: 93.0474% ( 6) 00:13:47.303 27.927 - 28.044: 93.1724% ( 10) 00:13:47.303 28.044 - 28.160: 93.2100% ( 3) 00:13:47.303 28.160 - 28.276: 93.2475% ( 3) 00:13:47.303 28.276 - 28.393: 93.3100% ( 5) 00:13:47.303 28.393 - 28.509: 93.3475% ( 3) 00:13:47.303 28.509 - 28.625: 93.4100% ( 5) 00:13:47.303 28.625 - 28.742: 93.4600% ( 4) 00:13:47.303 28.742 - 28.858: 93.5226% ( 5) 00:13:47.303 28.858 - 28.975: 93.5851% ( 5) 00:13:47.303 28.975 - 29.091: 93.6101% ( 2) 00:13:47.303 29.091 - 29.207: 93.6851% ( 6) 00:13:47.303 29.207 - 29.324: 93.7226% ( 3) 00:13:47.303 29.324 - 29.440: 93.8102% ( 7) 00:13:47.303 29.440 - 29.556: 93.9352% ( 10) 00:13:47.303 29.556 - 29.673: 94.0978% ( 13) 00:13:47.303 29.673 - 29.789: 94.2603% ( 13) 00:13:47.303 29.789 - 30.022: 94.4229% ( 13) 00:13:47.303 30.022 - 30.255: 94.7855% ( 29) 00:13:47.303 30.255 - 30.487: 95.1732% ( 31) 00:13:47.303 30.487 - 30.720: 95.7609% ( 47) 00:13:47.303 30.720 - 30.953: 96.2611% ( 40) 00:13:47.303 30.953 - 31.185: 96.7113% ( 36) 00:13:47.303 31.185 - 31.418: 97.0739% ( 29) 00:13:47.303 31.418 - 31.651: 97.5491% ( 38) 00:13:47.303 31.651 - 31.884: 97.9117% ( 29) 00:13:47.303 31.884 - 32.116: 
98.0618% ( 12) 00:13:47.303 32.116 - 32.349: 98.1493% ( 7) 00:13:47.303 32.349 - 32.582: 98.2869% ( 11) 00:13:47.303 32.582 - 32.815: 98.3869% ( 8) 00:13:47.303 32.815 - 33.047: 98.4994% ( 9) 00:13:47.303 33.047 - 33.280: 98.6245% ( 10) 00:13:47.303 33.280 - 33.513: 98.7620% ( 11) 00:13:47.303 33.513 - 33.745: 98.7870% ( 2) 00:13:47.303 33.745 - 33.978: 98.8496% ( 5) 00:13:47.303 33.978 - 34.211: 98.8746% ( 2) 00:13:47.303 34.211 - 34.444: 98.8996% ( 2) 00:13:47.303 34.444 - 34.676: 98.9496% ( 4) 00:13:47.303 34.676 - 34.909: 98.9621% ( 1) 00:13:47.303 34.909 - 35.142: 98.9746% ( 1) 00:13:47.303 35.142 - 35.375: 98.9996% ( 2) 00:13:47.303 35.375 - 35.607: 99.0121% ( 1) 00:13:47.303 35.840 - 36.073: 99.0246% ( 1) 00:13:47.303 36.073 - 36.305: 99.0496% ( 2) 00:13:47.303 36.305 - 36.538: 99.0747% ( 2) 00:13:47.303 36.538 - 36.771: 99.0872% ( 1) 00:13:47.304 36.771 - 37.004: 99.0997% ( 1) 00:13:47.304 37.004 - 37.236: 99.1122% ( 1) 00:13:47.304 37.236 - 37.469: 99.1247% ( 1) 00:13:47.304 37.469 - 37.702: 99.1622% ( 3) 00:13:47.304 37.702 - 37.935: 99.1747% ( 1) 00:13:47.304 37.935 - 38.167: 99.2122% ( 3) 00:13:47.304 38.400 - 38.633: 99.2247% ( 1) 00:13:47.304 38.865 - 39.098: 99.2372% ( 1) 00:13:47.304 39.098 - 39.331: 99.2497% ( 1) 00:13:47.304 39.331 - 39.564: 99.2747% ( 2) 00:13:47.304 39.564 - 39.796: 99.3247% ( 4) 00:13:47.304 39.796 - 40.029: 99.3623% ( 3) 00:13:47.304 40.029 - 40.262: 99.3748% ( 1) 00:13:47.304 40.262 - 40.495: 99.3998% ( 2) 00:13:47.304 40.495 - 40.727: 99.4623% ( 5) 00:13:47.304 40.727 - 40.960: 99.4873% ( 2) 00:13:47.304 41.193 - 41.425: 99.4998% ( 1) 00:13:47.304 41.658 - 41.891: 99.5123% ( 1) 00:13:47.304 41.891 - 42.124: 99.5248% ( 1) 00:13:47.304 42.124 - 42.356: 99.5373% ( 1) 00:13:47.304 42.589 - 42.822: 99.5498% ( 1) 00:13:47.304 42.822 - 43.055: 99.5623% ( 1) 00:13:47.304 43.287 - 43.520: 99.6249% ( 5) 00:13:47.304 44.684 - 44.916: 99.6499% ( 2) 00:13:47.304 45.615 - 45.847: 99.6874% ( 3) 00:13:47.304 46.313 - 46.545: 99.7124% ( 2) 00:13:47.304 46.545 - 46.778: 99.7374% ( 2) 00:13:47.304 46.778 - 47.011: 99.7749% ( 3) 00:13:47.304 47.476 - 47.709: 99.7874% ( 1) 00:13:47.304 47.942 - 48.175: 99.7999% ( 1) 00:13:47.304 49.804 - 50.036: 99.8124% ( 1) 00:13:47.304 50.036 - 50.269: 99.8374% ( 2) 00:13:47.304 51.898 - 52.131: 99.8499% ( 1) 00:13:47.304 54.458 - 54.691: 99.8624% ( 1) 00:13:47.304 55.855 - 56.087: 99.8750% ( 1) 00:13:47.304 56.553 - 56.785: 99.8875% ( 1) 00:13:47.304 57.484 - 57.716: 99.9000% ( 1) 00:13:47.304 60.975 - 61.440: 99.9125% ( 1) 00:13:47.304 62.836 - 63.302: 99.9250% ( 1) 00:13:47.304 69.818 - 70.284: 99.9375% ( 1) 00:13:47.304 81.920 - 82.385: 99.9500% ( 1) 00:13:47.304 82.851 - 83.316: 99.9625% ( 1) 00:13:47.304 94.953 - 95.418: 99.9750% ( 1) 00:13:47.304 98.676 - 99.142: 99.9875% ( 1) 00:13:47.304 113.105 - 113.571: 100.0000% ( 1) 00:13:47.304 00:13:47.304 Complete histogram 00:13:47.304 ================== 00:13:47.304 Range in us Cumulative Count 00:13:47.304 9.135 - 9.193: 0.0125% ( 1) 00:13:47.304 9.193 - 9.251: 0.0250% ( 1) 00:13:47.304 9.251 - 9.309: 0.1125% ( 7) 00:13:47.304 9.309 - 9.367: 0.2001% ( 7) 00:13:47.304 9.367 - 9.425: 0.2876% ( 7) 00:13:47.304 9.425 - 9.484: 0.4377% ( 12) 00:13:47.304 9.484 - 9.542: 0.7003% ( 21) 00:13:47.304 9.542 - 9.600: 1.1004% ( 32) 00:13:47.304 9.600 - 9.658: 1.6131% ( 41) 00:13:47.304 9.658 - 9.716: 2.2633% ( 52) 00:13:47.304 9.716 - 9.775: 3.0762% ( 65) 00:13:47.304 9.775 - 9.833: 3.9015% ( 66) 00:13:47.304 9.833 - 9.891: 4.9769% ( 86) 00:13:47.304 9.891 - 9.949: 5.9897% ( 81) 00:13:47.304 
9.949 - 10.007: 6.7650% ( 62) 00:13:47.304 10.007 - 10.065: 7.3903% ( 50) 00:13:47.304 10.065 - 10.124: 8.3156% ( 74) 00:13:47.304 10.124 - 10.182: 9.4160% ( 88) 00:13:47.304 10.182 - 10.240: 10.2288% ( 65) 00:13:47.304 10.240 - 10.298: 11.1667% ( 75) 00:13:47.304 10.298 - 10.356: 12.1671% ( 80) 00:13:47.304 10.356 - 10.415: 13.8802% ( 137) 00:13:47.304 10.415 - 10.473: 16.5562% ( 214) 00:13:47.304 10.473 - 10.531: 20.3451% ( 303) 00:13:47.304 10.531 - 10.589: 24.4842% ( 331) 00:13:47.304 10.589 - 10.647: 28.7983% ( 345) 00:13:47.304 10.647 - 10.705: 32.3746% ( 286) 00:13:47.304 10.705 - 10.764: 35.4133% ( 243) 00:13:47.304 10.764 - 10.822: 37.4515% ( 163) 00:13:47.304 10.822 - 10.880: 39.1522% ( 136) 00:13:47.304 10.880 - 10.938: 40.4277% ( 102) 00:13:47.304 10.938 - 10.996: 41.2405% ( 65) 00:13:47.304 10.996 - 11.055: 42.2283% ( 79) 00:13:47.304 11.055 - 11.113: 43.4413% ( 97) 00:13:47.304 11.113 - 11.171: 44.7918% ( 108) 00:13:47.304 11.171 - 11.229: 46.2548% ( 117) 00:13:47.304 11.229 - 11.287: 47.1177% ( 69) 00:13:47.304 11.287 - 11.345: 48.0930% ( 78) 00:13:47.304 11.345 - 11.404: 49.2560% ( 93) 00:13:47.304 11.404 - 11.462: 50.6315% ( 110) 00:13:47.304 11.462 - 11.520: 52.0695% ( 115) 00:13:47.304 11.520 - 11.578: 53.2825% ( 97) 00:13:47.304 11.578 - 11.636: 54.0828% ( 64) 00:13:47.304 11.636 - 11.695: 54.9456% ( 69) 00:13:47.304 11.695 - 11.753: 55.9085% ( 77) 00:13:47.304 11.753 - 11.811: 56.9589% ( 84) 00:13:47.304 11.811 - 11.869: 58.1968% ( 99) 00:13:47.304 11.869 - 11.927: 59.0346% ( 67) 00:13:47.304 11.927 - 11.985: 59.7849% ( 60) 00:13:47.304 11.985 - 12.044: 60.3726% ( 47) 00:13:47.304 12.044 - 12.102: 60.7728% ( 32) 00:13:47.304 12.102 - 12.160: 61.1229% ( 28) 00:13:47.304 12.160 - 12.218: 61.7106% ( 47) 00:13:47.304 12.218 - 12.276: 62.2233% ( 41) 00:13:47.304 12.276 - 12.335: 62.7485% ( 42) 00:13:47.304 12.335 - 12.393: 63.2362% ( 39) 00:13:47.304 12.393 - 12.451: 63.7989% ( 45) 00:13:47.304 12.451 - 12.509: 64.2741% ( 38) 00:13:47.304 12.509 - 12.567: 64.7243% ( 36) 00:13:47.304 12.567 - 12.625: 65.1369% ( 33) 00:13:47.304 12.625 - 12.684: 65.6496% ( 41) 00:13:47.304 12.684 - 12.742: 65.9872% ( 27) 00:13:47.304 12.742 - 12.800: 66.2874% ( 24) 00:13:47.304 12.800 - 12.858: 66.5875% ( 24) 00:13:47.304 12.858 - 12.916: 66.7250% ( 11) 00:13:47.304 12.916 - 12.975: 66.9376% ( 17) 00:13:47.304 12.975 - 13.033: 67.4128% ( 38) 00:13:47.304 13.033 - 13.091: 68.3131% ( 72) 00:13:47.304 13.091 - 13.149: 70.1263% ( 145) 00:13:47.304 13.149 - 13.207: 72.1896% ( 165) 00:13:47.304 13.207 - 13.265: 74.4779% ( 183) 00:13:47.304 13.265 - 13.324: 76.5287% ( 164) 00:13:47.304 13.324 - 13.382: 78.4544% ( 154) 00:13:47.304 13.382 - 13.440: 80.1175% ( 133) 00:13:47.304 13.440 - 13.498: 81.2305% ( 89) 00:13:47.304 13.498 - 13.556: 82.1933% ( 77) 00:13:47.304 13.556 - 13.615: 83.0561% ( 69) 00:13:47.304 13.615 - 13.673: 83.7564% ( 56) 00:13:47.304 13.673 - 13.731: 84.0940% ( 27) 00:13:47.304 13.731 - 13.789: 84.4817% ( 31) 00:13:47.304 13.789 - 13.847: 84.7443% ( 21) 00:13:47.304 13.847 - 13.905: 84.9694% ( 18) 00:13:47.304 13.905 - 13.964: 85.1694% ( 16) 00:13:47.304 13.964 - 14.022: 85.3070% ( 11) 00:13:47.304 14.022 - 14.080: 85.4696% ( 13) 00:13:47.304 14.080 - 14.138: 85.5946% ( 10) 00:13:47.304 14.138 - 14.196: 85.7321% ( 11) 00:13:47.304 14.196 - 14.255: 85.8197% ( 7) 00:13:47.304 14.255 - 14.313: 85.9072% ( 7) 00:13:47.304 14.313 - 14.371: 85.9697% ( 5) 00:13:47.304 14.371 - 14.429: 86.0573% ( 7) 00:13:47.304 14.429 - 14.487: 86.1573% ( 8) 00:13:47.304 14.487 - 14.545: 86.2573% ( 8) 
00:13:47.304 14.545 - 14.604: 86.3699% ( 9) 00:13:47.304 14.604 - 14.662: 86.4949% ( 10) 00:13:47.304 14.662 - 14.720: 86.6575% ( 13) 00:13:47.304 14.720 - 14.778: 86.7825% ( 10) 00:13:47.304 14.778 - 14.836: 86.8576% ( 6) 00:13:47.304 14.836 - 14.895: 86.9451% ( 7) 00:13:47.304 14.895 - 15.011: 87.3077% ( 29) 00:13:47.304 15.011 - 15.127: 87.8204% ( 41) 00:13:47.304 15.127 - 15.244: 88.3331% ( 41) 00:13:47.304 15.244 - 15.360: 88.7708% ( 35) 00:13:47.304 15.360 - 15.476: 89.2085% ( 35) 00:13:47.304 15.476 - 15.593: 89.4961% ( 23) 00:13:47.304 15.593 - 15.709: 89.7587% ( 21) 00:13:47.304 15.709 - 15.825: 89.9462% ( 15) 00:13:47.304 15.825 - 15.942: 90.0213% ( 6) 00:13:47.304 15.942 - 16.058: 90.0588% ( 3) 00:13:47.304 16.058 - 16.175: 90.1463% ( 7) 00:13:47.304 16.175 - 16.291: 90.2213% ( 6) 00:13:47.304 16.291 - 16.407: 90.2964% ( 6) 00:13:47.304 16.407 - 16.524: 90.3339% ( 3) 00:13:47.304 16.524 - 16.640: 90.4089% ( 6) 00:13:47.304 16.640 - 16.756: 90.4464% ( 3) 00:13:47.304 16.756 - 16.873: 90.4839% ( 3) 00:13:47.304 16.873 - 16.989: 90.5465% ( 5) 00:13:47.304 16.989 - 17.105: 90.5715% ( 2) 00:13:47.304 17.105 - 17.222: 90.5965% ( 2) 00:13:47.304 17.222 - 17.338: 90.6215% ( 2) 00:13:47.304 17.338 - 17.455: 90.6465% ( 2) 00:13:47.304 17.455 - 17.571: 90.6840% ( 3) 00:13:47.304 17.687 - 17.804: 90.7090% ( 2) 00:13:47.304 17.804 - 17.920: 90.7340% ( 2) 00:13:47.304 17.920 - 18.036: 90.7590% ( 2) 00:13:47.304 18.036 - 18.153: 90.7840% ( 2) 00:13:47.304 18.153 - 18.269: 90.8091% ( 2) 00:13:47.304 18.269 - 18.385: 90.8216% ( 1) 00:13:47.304 18.385 - 18.502: 90.8466% ( 2) 00:13:47.304 18.502 - 18.618: 90.8716% ( 2) 00:13:47.304 18.618 - 18.735: 90.8966% ( 2) 00:13:47.304 18.735 - 18.851: 90.9341% ( 3) 00:13:47.304 18.851 - 18.967: 90.9841% ( 4) 00:13:47.304 18.967 - 19.084: 91.0466% ( 5) 00:13:47.304 19.084 - 19.200: 91.1092% ( 5) 00:13:47.304 19.200 - 19.316: 91.1967% ( 7) 00:13:47.304 19.316 - 19.433: 91.2842% ( 7) 00:13:47.304 19.433 - 19.549: 91.3968% ( 9) 00:13:47.304 19.549 - 19.665: 91.4843% ( 7) 00:13:47.304 19.665 - 19.782: 91.5593% ( 6) 00:13:47.304 19.782 - 19.898: 91.6469% ( 7) 00:13:47.304 19.898 - 20.015: 91.6844% ( 3) 00:13:47.305 20.015 - 20.131: 91.7719% ( 7) 00:13:47.305 20.131 - 20.247: 91.8094% ( 3) 00:13:47.305 20.247 - 20.364: 91.9095% ( 8) 00:13:47.305 20.364 - 20.480: 91.9845% ( 6) 00:13:47.305 20.480 - 20.596: 92.0845% ( 8) 00:13:47.305 20.596 - 20.713: 92.1220% ( 3) 00:13:47.305 20.713 - 20.829: 92.2096% ( 7) 00:13:47.305 20.829 - 20.945: 92.2221% ( 1) 00:13:47.305 20.945 - 21.062: 92.2846% ( 5) 00:13:47.305 21.062 - 21.178: 92.3721% ( 7) 00:13:47.305 21.178 - 21.295: 92.4347% ( 5) 00:13:47.305 21.295 - 21.411: 92.4972% ( 5) 00:13:47.305 21.411 - 21.527: 92.5597% ( 5) 00:13:47.305 21.527 - 21.644: 92.6097% ( 4) 00:13:47.305 21.644 - 21.760: 92.6472% ( 3) 00:13:47.305 21.760 - 21.876: 92.6723% ( 2) 00:13:47.305 21.876 - 21.993: 92.7223% ( 4) 00:13:47.305 21.993 - 22.109: 92.7723% ( 4) 00:13:47.305 22.109 - 22.225: 92.8098% ( 3) 00:13:47.305 22.225 - 22.342: 92.8348% ( 2) 00:13:47.305 22.342 - 22.458: 92.8473% ( 1) 00:13:47.305 22.458 - 22.575: 92.8848% ( 3) 00:13:47.305 22.575 - 22.691: 92.8973% ( 1) 00:13:47.305 22.691 - 22.807: 92.9098% ( 1) 00:13:47.305 22.807 - 22.924: 92.9349% ( 2) 00:13:47.305 22.924 - 23.040: 92.9974% ( 5) 00:13:47.305 23.040 - 23.156: 93.0099% ( 1) 00:13:47.305 23.156 - 23.273: 93.0724% ( 5) 00:13:47.305 23.273 - 23.389: 93.0974% ( 2) 00:13:47.305 23.389 - 23.505: 93.1474% ( 4) 00:13:47.305 23.505 - 23.622: 93.1599% ( 1) 00:13:47.305 23.622 - 
23.738: 93.1849% ( 2) 00:13:47.305 23.738 - 23.855: 93.2225% ( 3) 00:13:47.305 23.855 - 23.971: 93.2475% ( 2) 00:13:47.305 23.971 - 24.087: 93.3225% ( 6) 00:13:47.305 24.087 - 24.204: 93.3850% ( 5) 00:13:47.305 24.204 - 24.320: 93.4350% ( 4) 00:13:47.305 24.436 - 24.553: 93.5101% ( 6) 00:13:47.305 24.553 - 24.669: 93.5226% ( 1) 00:13:47.305 24.669 - 24.785: 93.5601% ( 3) 00:13:47.305 24.785 - 24.902: 93.6351% ( 6) 00:13:47.305 24.902 - 25.018: 93.6476% ( 1) 00:13:47.305 25.018 - 25.135: 93.7101% ( 5) 00:13:47.305 25.135 - 25.251: 93.7727% ( 5) 00:13:47.305 25.251 - 25.367: 93.8727% ( 8) 00:13:47.305 25.367 - 25.484: 93.9727% ( 8) 00:13:47.305 25.484 - 25.600: 94.0478% ( 6) 00:13:47.305 25.600 - 25.716: 94.1853% ( 11) 00:13:47.305 25.716 - 25.833: 94.3229% ( 11) 00:13:47.305 25.833 - 25.949: 94.5980% ( 22) 00:13:47.305 25.949 - 26.065: 94.9231% ( 26) 00:13:47.305 26.065 - 26.182: 95.1357% ( 17) 00:13:47.305 26.182 - 26.298: 95.4483% ( 25) 00:13:47.305 26.298 - 26.415: 95.7734% ( 26) 00:13:47.305 26.415 - 26.531: 96.0860% ( 25) 00:13:47.305 26.531 - 26.647: 96.4237% ( 27) 00:13:47.305 26.647 - 26.764: 96.7863% ( 29) 00:13:47.305 26.764 - 26.880: 97.1614% ( 30) 00:13:47.305 26.880 - 26.996: 97.5116% ( 28) 00:13:47.305 26.996 - 27.113: 97.7492% ( 19) 00:13:47.305 27.113 - 27.229: 97.9367% ( 15) 00:13:47.305 27.229 - 27.345: 97.9867% ( 4) 00:13:47.305 27.345 - 27.462: 98.0618% ( 6) 00:13:47.305 27.462 - 27.578: 98.1618% ( 8) 00:13:47.305 27.578 - 27.695: 98.3119% ( 12) 00:13:47.305 27.695 - 27.811: 98.3994% ( 7) 00:13:47.305 27.811 - 27.927: 98.5370% ( 11) 00:13:47.305 27.927 - 28.044: 98.5870% ( 4) 00:13:47.305 28.044 - 28.160: 98.6245% ( 3) 00:13:47.305 28.160 - 28.276: 98.6995% ( 6) 00:13:47.305 28.276 - 28.393: 98.7745% ( 6) 00:13:47.305 28.393 - 28.509: 98.8871% ( 9) 00:13:47.305 28.509 - 28.625: 98.9371% ( 4) 00:13:47.305 28.625 - 28.742: 98.9621% ( 2) 00:13:47.305 28.742 - 28.858: 98.9996% ( 3) 00:13:47.305 28.858 - 28.975: 99.0371% ( 3) 00:13:47.305 28.975 - 29.091: 99.0621% ( 2) 00:13:47.305 29.091 - 29.207: 99.0872% ( 2) 00:13:47.305 29.207 - 29.324: 99.0997% ( 1) 00:13:47.305 29.324 - 29.440: 99.1122% ( 1) 00:13:47.305 29.440 - 29.556: 99.1247% ( 1) 00:13:47.305 29.556 - 29.673: 99.1622% ( 3) 00:13:47.305 29.673 - 29.789: 99.1747% ( 1) 00:13:47.305 29.789 - 30.022: 99.1872% ( 1) 00:13:47.305 30.022 - 30.255: 99.1997% ( 1) 00:13:47.305 30.255 - 30.487: 99.2122% ( 1) 00:13:47.305 30.487 - 30.720: 99.2247% ( 1) 00:13:47.305 30.720 - 30.953: 99.2497% ( 2) 00:13:47.305 30.953 - 31.185: 99.2622% ( 1) 00:13:47.305 31.185 - 31.418: 99.2872% ( 2) 00:13:47.305 32.116 - 32.349: 99.2997% ( 1) 00:13:47.305 32.582 - 32.815: 99.3122% ( 1) 00:13:47.305 33.047 - 33.280: 99.3247% ( 1) 00:13:47.305 33.280 - 33.513: 99.3373% ( 1) 00:13:47.305 33.978 - 34.211: 99.3498% ( 1) 00:13:47.305 34.211 - 34.444: 99.3748% ( 2) 00:13:47.305 34.444 - 34.676: 99.3873% ( 1) 00:13:47.305 34.676 - 34.909: 99.3998% ( 1) 00:13:47.305 35.142 - 35.375: 99.4123% ( 1) 00:13:47.305 35.375 - 35.607: 99.4373% ( 2) 00:13:47.305 35.607 - 35.840: 99.4498% ( 1) 00:13:47.305 35.840 - 36.073: 99.4748% ( 2) 00:13:47.305 36.305 - 36.538: 99.4998% ( 2) 00:13:47.305 36.538 - 36.771: 99.5248% ( 2) 00:13:47.305 36.771 - 37.004: 99.5498% ( 2) 00:13:47.305 37.004 - 37.236: 99.5623% ( 1) 00:13:47.305 37.236 - 37.469: 99.5748% ( 1) 00:13:47.305 37.702 - 37.935: 99.5873% ( 1) 00:13:47.305 37.935 - 38.167: 99.5998% ( 1) 00:13:47.305 38.167 - 38.400: 99.6249% ( 2) 00:13:47.305 38.865 - 39.098: 99.6374% ( 1) 00:13:47.305 39.331 - 39.564: 99.6624% ( 
2) 00:13:47.305 41.193 - 41.425: 99.6749% ( 1) 00:13:47.305 42.124 - 42.356: 99.6999% ( 2) 00:13:47.305 42.356 - 42.589: 99.7124% ( 1) 00:13:47.305 42.589 - 42.822: 99.7374% ( 2) 00:13:47.305 42.822 - 43.055: 99.7624% ( 2) 00:13:47.305 43.055 - 43.287: 99.7749% ( 1) 00:13:47.305 43.287 - 43.520: 99.7874% ( 1) 00:13:47.305 43.985 - 44.218: 99.8124% ( 2) 00:13:47.305 44.218 - 44.451: 99.8249% ( 1) 00:13:47.305 44.916 - 45.149: 99.8374% ( 1) 00:13:47.305 45.382 - 45.615: 99.8499% ( 1) 00:13:47.305 46.313 - 46.545: 99.8624% ( 1) 00:13:47.305 47.709 - 47.942: 99.8750% ( 1) 00:13:47.305 48.640 - 48.873: 99.8875% ( 1) 00:13:47.305 50.967 - 51.200: 99.9000% ( 1) 00:13:47.305 53.760 - 53.993: 99.9125% ( 1) 00:13:47.305 55.156 - 55.389: 99.9250% ( 1) 00:13:47.305 63.302 - 63.767: 99.9375% ( 1) 00:13:47.305 64.233 - 64.698: 99.9500% ( 1) 00:13:47.305 76.335 - 76.800: 99.9625% ( 1) 00:13:47.305 77.265 - 77.731: 99.9750% ( 1) 00:13:47.305 83.782 - 84.247: 99.9875% ( 1) 00:13:47.305 101.004 - 101.469: 100.0000% ( 1) 00:13:47.305 00:13:47.305 ************************************ 00:13:47.305 END TEST nvme_overhead 00:13:47.305 ************************************ 00:13:47.305 00:13:47.305 real 0m1.375s 00:13:47.305 user 0m1.124s 00:13:47.305 sys 0m0.187s 00:13:47.305 10:19:41 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.305 10:19:41 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 10:19:41 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:47.305 10:19:41 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:13:47.305 10:19:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.305 10:19:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.305 ************************************ 00:13:47.305 START TEST nvme_arbitration 00:13:47.305 ************************************ 00:13:47.305 10:19:41 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:51.497 Initializing NVMe Controllers 00:13:51.497 Attached to 0000:00:10.0 00:13:51.497 Attached to 0000:00:11.0 00:13:51.497 Attached to 0000:00:13.0 00:13:51.497 Attached to 0000:00:12.0 00:13:51.497 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:13:51.497 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:13:51.497 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:13:51.497 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:13:51.497 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:13:51.497 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:13:51.497 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:51.497 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:13:51.497 Initialization complete. Launching workers. 
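The arbitration example echoes its fully expanded configuration, so the short "-t 3 -i 0" invocation is equivalent to the explicit command below, copied from the "run with configuration" line above; the per-core "urgent priority queue" threads and the differing secs/100000 ios rates in the results that follow are the arbitration behaviour under test:

    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0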
00:13:51.497 Starting thread on core 1 with urgent priority queue 00:13:51.497 Starting thread on core 2 with urgent priority queue 00:13:51.497 Starting thread on core 3 with urgent priority queue 00:13:51.497 Starting thread on core 0 with urgent priority queue 00:13:51.497 QEMU NVMe Ctrl (12340 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:13:51.497 QEMU NVMe Ctrl (12342 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:13:51.497 QEMU NVMe Ctrl (12341 ) core 1: 405.33 IO/s 246.71 secs/100000 ios 00:13:51.497 QEMU NVMe Ctrl (12342 ) core 1: 405.33 IO/s 246.71 secs/100000 ios 00:13:51.497 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:13:51.497 QEMU NVMe Ctrl (12342 ) core 3: 298.67 IO/s 334.82 secs/100000 ios 00:13:51.497 ======================================================== 00:13:51.497 00:13:51.497 00:13:51.497 real 0m3.481s 00:13:51.497 user 0m9.377s 00:13:51.497 sys 0m0.212s 00:13:51.497 ************************************ 00:13:51.497 END TEST nvme_arbitration 00:13:51.497 ************************************ 00:13:51.497 10:19:45 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.497 10:19:45 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 10:19:45 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:51.497 10:19:45 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:51.497 10:19:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.497 10:19:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 ************************************ 00:13:51.497 START TEST nvme_single_aen 00:13:51.497 ************************************ 00:13:51.497 10:19:45 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:51.497 Asynchronous Event Request test 00:13:51.497 Attached to 0000:00:10.0 00:13:51.497 Attached to 0000:00:11.0 00:13:51.497 Attached to 0000:00:13.0 00:13:51.497 Attached to 0000:00:12.0 00:13:51.497 Reset controller to setup AER completions for this process 00:13:51.497 Registering asynchronous event callbacks... 
00:13:51.497 Getting orig temperature thresholds of all controllers 00:13:51.497 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:51.497 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:51.497 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:51.497 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:51.497 Setting all controllers temperature threshold low to trigger AER 00:13:51.497 Waiting for all controllers temperature threshold to be set lower 00:13:51.497 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:51.497 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:51.497 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:51.497 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:51.497 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:51.497 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:51.497 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:51.497 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:51.497 Waiting for all controllers to trigger AER and reset threshold 00:13:51.497 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.497 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.497 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.497 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:51.497 Cleaning up... 00:13:51.497 ************************************ 00:13:51.497 END TEST nvme_single_aen 00:13:51.497 ************************************ 00:13:51.497 00:13:51.497 real 0m0.312s 00:13:51.497 user 0m0.121s 00:13:51.497 sys 0m0.146s 00:13:51.497 10:19:45 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.497 10:19:45 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 10:19:45 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:13:51.497 10:19:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.497 10:19:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.497 10:19:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 ************************************ 00:13:51.497 START TEST nvme_doorbell_aers 00:13:51.497 ************************************ 00:13:51.497 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:13:51.497 10:19:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:51.498 10:19:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:51.498 [2024-11-25 10:19:45.797953] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:01.466 Executing: test_write_invalid_db 00:14:01.466 Waiting for AER completion... 00:14:01.466 Failure: test_write_invalid_db 00:14:01.466 00:14:01.466 Executing: test_invalid_db_write_overflow_sq 00:14:01.466 Waiting for AER completion... 00:14:01.466 Failure: test_invalid_db_write_overflow_sq 00:14:01.466 00:14:01.466 Executing: test_invalid_db_write_overflow_cq 00:14:01.466 Waiting for AER completion... 00:14:01.466 Failure: test_invalid_db_write_overflow_cq 00:14:01.466 00:14:01.466 10:19:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:01.466 10:19:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:01.724 [2024-11-25 10:19:55.860892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:11.794 Executing: test_write_invalid_db 00:14:11.794 Waiting for AER completion... 00:14:11.794 Failure: test_write_invalid_db 00:14:11.794 00:14:11.794 Executing: test_invalid_db_write_overflow_sq 00:14:11.794 Waiting for AER completion... 00:14:11.794 Failure: test_invalid_db_write_overflow_sq 00:14:11.794 00:14:11.794 Executing: test_invalid_db_write_overflow_cq 00:14:11.794 Waiting for AER completion... 00:14:11.794 Failure: test_invalid_db_write_overflow_cq 00:14:11.794 00:14:11.794 10:20:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:11.794 10:20:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:11.794 [2024-11-25 10:20:05.901112] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:21.861 Executing: test_write_invalid_db 00:14:21.861 Waiting for AER completion... 00:14:21.861 Failure: test_write_invalid_db 00:14:21.861 00:14:21.861 Executing: test_invalid_db_write_overflow_sq 00:14:21.861 Waiting for AER completion... 00:14:21.861 Failure: test_invalid_db_write_overflow_sq 00:14:21.861 00:14:21.861 Executing: test_invalid_db_write_overflow_cq 00:14:21.861 Waiting for AER completion... 
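The xtrace above is the whole of nvme_doorbell_aers: query gen_nvme.sh for the NVMe BDFs, then run the doorbell_aers binary once per device under a 10-second timeout; the nvme_pcie_common.c ERROR lines ("owning process ... not found") show up once per device in this log without failing the test. Reconstructed as plain bash from the traced lines, paths and flags verbatim:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a JSON config; jq pulls out the PCI addresses (BDFs)
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done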
00:14:21.861 Failure: test_invalid_db_write_overflow_cq 00:14:21.861 00:14:21.861 10:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:21.861 10:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:21.861 [2024-11-25 10:20:15.974713] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.865 Executing: test_write_invalid_db 00:14:31.865 Waiting for AER completion... 00:14:31.866 Failure: test_write_invalid_db 00:14:31.866 00:14:31.866 Executing: test_invalid_db_write_overflow_sq 00:14:31.866 Waiting for AER completion... 00:14:31.866 Failure: test_invalid_db_write_overflow_sq 00:14:31.866 00:14:31.866 Executing: test_invalid_db_write_overflow_cq 00:14:31.866 Waiting for AER completion... 00:14:31.866 Failure: test_invalid_db_write_overflow_cq 00:14:31.866 00:14:31.866 00:14:31.866 real 0m40.285s 00:14:31.866 user 0m34.080s 00:14:31.866 sys 0m5.812s 00:14:31.866 10:20:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.866 10:20:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:31.866 ************************************ 00:14:31.866 END TEST nvme_doorbell_aers 00:14:31.866 ************************************ 00:14:31.866 10:20:25 nvme -- nvme/nvme.sh@97 -- # uname 00:14:31.866 10:20:25 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:31.866 10:20:25 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:31.866 10:20:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:31.866 10:20:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.866 10:20:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.866 ************************************ 00:14:31.866 START TEST nvme_multi_aen 00:14:31.866 ************************************ 00:14:31.866 10:20:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:31.866 [2024-11-25 10:20:26.079944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.080056] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.080079] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.082217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.082525] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.082552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.084406] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. 
Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.084452] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.084470] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.086013] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.086061] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 [2024-11-25 10:20:26.086088] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64736) is not found. Dropping the request. 00:14:31.866 Child process pid: 65252 00:14:32.124 [Child] Asynchronous Event Request test 00:14:32.124 [Child] Attached to 0000:00:10.0 00:14:32.124 [Child] Attached to 0000:00:11.0 00:14:32.124 [Child] Attached to 0000:00:13.0 00:14:32.124 [Child] Attached to 0000:00:12.0 00:14:32.124 [Child] Registering asynchronous event callbacks... 00:14:32.124 [Child] Getting orig temperature thresholds of all controllers 00:14:32.124 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.124 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.124 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.124 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.124 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:32.124 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.124 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.124 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.124 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.124 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.124 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.124 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.124 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.124 [Child] Cleaning up... 00:14:32.382 Asynchronous Event Request test 00:14:32.382 Attached to 0000:00:10.0 00:14:32.382 Attached to 0000:00:11.0 00:14:32.382 Attached to 0000:00:13.0 00:14:32.382 Attached to 0000:00:12.0 00:14:32.382 Reset controller to setup AER completions for this process 00:14:32.382 Registering asynchronous event callbacks... 
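The [Child] block above comes from a second process: aer runs here as "aer -m -T -i 0" (flags verbatim from the run_test line), and "Child process pid: 65252" marks the fork, after which parent and child each register AER callbacks and lower the temperature threshold on all four controllers to trigger the events. A sketch of the invocation:

    # -m : multi-process variant (inference from the [Child] output above)
    # -T : temperature-threshold AER test (inference from the threshold lines;
    #      the earlier single-process run used the same flag without -m)
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0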
00:14:32.382 Getting orig temperature thresholds of all controllers 00:14:32.382 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.382 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.382 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.382 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.382 Setting all controllers temperature threshold low to trigger AER 00:14:32.382 Waiting for all controllers temperature threshold to be set lower 00:14:32.382 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.382 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:32.382 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.382 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:32.382 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.382 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:32.382 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.382 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:32.382 Waiting for all controllers to trigger AER and reset threshold 00:14:32.382 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.382 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.382 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.382 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.382 Cleaning up... 00:14:32.382 00:14:32.382 real 0m0.698s 00:14:32.382 user 0m0.250s 00:14:32.382 sys 0m0.340s 00:14:32.382 10:20:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.382 10:20:26 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:32.382 ************************************ 00:14:32.382 END TEST nvme_multi_aen 00:14:32.382 ************************************ 00:14:32.382 10:20:26 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:32.382 10:20:26 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:32.382 10:20:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.382 10:20:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.382 ************************************ 00:14:32.382 START TEST nvme_startup 00:14:32.382 ************************************ 00:14:32.382 10:20:26 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:32.641 Initializing NVMe Controllers 00:14:32.641 Attached to 0000:00:10.0 00:14:32.641 Attached to 0000:00:11.0 00:14:32.641 Attached to 0000:00:13.0 00:14:32.641 Attached to 0000:00:12.0 00:14:32.641 Initialization complete. 00:14:32.641 Time used:231726.766 (us). 
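nvme_startup is the simplest probe in the sequence: attach all four controllers, report the elapsed time, detach. "Time used:231726.766 (us)" is the measurement itself, roughly 232 ms to bring the controllers up. Invocation as in the run_test line above:

    # -t 1000000 is passed through verbatim; reading it as a microsecond time
    # budget for startup is an assumption, not confirmed by this log
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000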
00:14:32.641 ************************************ 00:14:32.641 END TEST nvme_startup 00:14:32.641 ************************************ 00:14:32.641 00:14:32.641 real 0m0.331s 00:14:32.641 user 0m0.107s 00:14:32.641 sys 0m0.170s 00:14:32.641 10:20:26 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.641 10:20:26 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:32.641 10:20:26 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:32.641 10:20:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.641 10:20:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.641 10:20:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.641 ************************************ 00:14:32.641 START TEST nvme_multi_secondary 00:14:32.641 ************************************ 00:14:32.641 10:20:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:14:32.641 10:20:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65308 00:14:32.641 10:20:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:32.641 10:20:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65309 00:14:32.641 10:20:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:32.641 10:20:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:35.955 Initializing NVMe Controllers 00:14:35.955 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:35.955 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:35.955 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:35.955 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:35.955 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:35.955 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:35.955 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:35.955 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:35.955 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:35.955 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:35.955 Initialization complete. Launching workers. 
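nvme_multi_secondary launches three spdk_nvme_perf instances that share shared-memory instance 0 but run on disjoint core masks, so primary/secondary process coexistence is exercised while all three read from the same controllers; the commands below are copied from the nvme.sh@51/@53/@55 lines above and run concurrently (plain & and wait here are a sketch, the script tracks the pids explicitly as pid0/pid1):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$bin" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # 5 s on core 0, outlives the others
    "$bin" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s on core 1
    "$bin" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # 3 s on core 2
    wait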
00:14:35.955 ======================================================== 00:14:35.955 Latency(us) 00:14:35.955 Device Information : IOPS MiB/s Average min max 00:14:35.955 PCIE (0000:00:10.0) NSID 1 from core 2: 2231.28 8.72 7168.15 1298.39 14585.87 00:14:35.955 PCIE (0000:00:11.0) NSID 1 from core 2: 2231.28 8.72 7170.58 1255.94 16803.71 00:14:35.955 PCIE (0000:00:13.0) NSID 1 from core 2: 2231.28 8.72 7170.32 1317.67 16847.05 00:14:35.955 PCIE (0000:00:12.0) NSID 1 from core 2: 2231.28 8.72 7169.59 1340.43 16464.67 00:14:35.955 PCIE (0000:00:12.0) NSID 2 from core 2: 2231.28 8.72 7179.49 1308.52 14286.60 00:14:35.955 PCIE (0000:00:12.0) NSID 3 from core 2: 2231.28 8.72 7178.68 1339.29 14125.99 00:14:35.955 ======================================================== 00:14:35.955 Total : 13387.71 52.30 7172.80 1255.94 16847.05 00:14:35.955 00:14:36.213 10:20:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65308 00:14:36.213 Initializing NVMe Controllers 00:14:36.213 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:36.213 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:36.213 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:36.213 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:36.213 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:36.213 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:36.213 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:36.213 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:36.213 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:36.213 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:36.213 Initialization complete. Launching workers. 00:14:36.213 ======================================================== 00:14:36.213 Latency(us) 00:14:36.213 Device Information : IOPS MiB/s Average min max 00:14:36.213 PCIE (0000:00:10.0) NSID 1 from core 1: 4882.80 19.07 3274.71 1268.96 17693.95 00:14:36.213 PCIE (0000:00:11.0) NSID 1 from core 1: 4882.80 19.07 3276.39 1306.58 18260.38 00:14:36.213 PCIE (0000:00:13.0) NSID 1 from core 1: 4882.80 19.07 3276.27 1310.82 18581.85 00:14:36.213 PCIE (0000:00:12.0) NSID 1 from core 1: 4882.80 19.07 3276.13 1314.23 18952.95 00:14:36.213 PCIE (0000:00:12.0) NSID 2 from core 1: 4882.80 19.07 3276.03 1317.50 18612.20 00:14:36.213 PCIE (0000:00:12.0) NSID 3 from core 1: 4882.80 19.07 3275.93 1295.69 17900.13 00:14:36.213 ======================================================== 00:14:36.213 Total : 29296.78 114.44 3275.91 1268.96 18952.95 00:14:36.213 00:14:38.201 Initializing NVMe Controllers 00:14:38.201 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:38.201 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:38.201 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:38.201 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:38.201 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:38.201 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:38.201 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:38.201 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:38.201 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:38.201 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:38.202 Initialization complete. Launching workers. 
00:14:38.202 ======================================================== 00:14:38.202 Latency(us) 00:14:38.202 Device Information : IOPS MiB/s Average min max 00:14:38.202 PCIE (0000:00:10.0) NSID 1 from core 0: 7587.20 29.64 2107.24 940.55 6664.50 00:14:38.202 PCIE (0000:00:11.0) NSID 1 from core 0: 7587.20 29.64 2108.37 964.17 6549.02 00:14:38.202 PCIE (0000:00:13.0) NSID 1 from core 0: 7587.20 29.64 2108.32 971.35 7032.84 00:14:38.202 PCIE (0000:00:12.0) NSID 1 from core 0: 7587.20 29.64 2108.25 952.96 7013.37 00:14:38.202 PCIE (0000:00:12.0) NSID 2 from core 0: 7587.20 29.64 2108.21 929.28 6521.68 00:14:38.202 PCIE (0000:00:12.0) NSID 3 from core 0: 7587.20 29.64 2108.17 894.70 6321.51 00:14:38.202 ======================================================== 00:14:38.202 Total : 45523.18 177.82 2108.09 894.70 7032.84 00:14:38.202 00:14:38.202 10:20:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65309 00:14:38.202 10:20:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65378 00:14:38.202 10:20:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:38.202 10:20:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65379 00:14:38.202 10:20:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:38.202 10:20:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:41.509 Initializing NVMe Controllers 00:14:41.509 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:41.509 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:41.509 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:41.509 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:41.509 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:41.509 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:41.509 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:41.509 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:41.509 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:41.509 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:41.509 Initialization complete. Launching workers. 
00:14:41.509 ======================================================== 00:14:41.509 Latency(us) 00:14:41.509 Device Information : IOPS MiB/s Average min max 00:14:41.509 PCIE (0000:00:10.0) NSID 1 from core 0: 5366.49 20.96 2979.54 1024.80 7209.47 00:14:41.509 PCIE (0000:00:11.0) NSID 1 from core 0: 5366.49 20.96 2981.53 1074.32 6353.92 00:14:41.509 PCIE (0000:00:13.0) NSID 1 from core 0: 5366.49 20.96 2981.67 1074.56 6955.94 00:14:41.509 PCIE (0000:00:12.0) NSID 1 from core 0: 5366.49 20.96 2981.67 1023.70 7130.74 00:14:41.509 PCIE (0000:00:12.0) NSID 2 from core 0: 5366.49 20.96 2981.61 1029.85 6425.67 00:14:41.509 PCIE (0000:00:12.0) NSID 3 from core 0: 5366.49 20.96 2981.58 1072.13 7638.75 00:14:41.509 ======================================================== 00:14:41.509 Total : 32198.93 125.78 2981.27 1023.70 7638.75 00:14:41.509 00:14:41.767 Initializing NVMe Controllers 00:14:41.767 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:41.767 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:41.767 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:41.767 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:41.767 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:41.767 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:41.767 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:41.767 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:41.767 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:41.767 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:41.767 Initialization complete. Launching workers. 00:14:41.767 ======================================================== 00:14:41.767 Latency(us) 00:14:41.767 Device Information : IOPS MiB/s Average min max 00:14:41.767 PCIE (0000:00:10.0) NSID 1 from core 1: 5198.36 20.31 3075.85 1222.73 15236.64 00:14:41.767 PCIE (0000:00:11.0) NSID 1 from core 1: 5198.36 20.31 3077.53 1282.65 15377.47 00:14:41.767 PCIE (0000:00:13.0) NSID 1 from core 1: 5198.36 20.31 3077.81 1288.38 15872.87 00:14:41.767 PCIE (0000:00:12.0) NSID 1 from core 1: 5198.36 20.31 3077.98 1263.84 14582.59 00:14:41.767 PCIE (0000:00:12.0) NSID 2 from core 1: 5198.36 20.31 3078.11 1243.42 14827.69 00:14:41.767 PCIE (0000:00:12.0) NSID 3 from core 1: 5198.36 20.31 3078.51 1142.79 15012.60 00:14:41.767 ======================================================== 00:14:41.767 Total : 31190.13 121.84 3077.63 1142.79 15872.87 00:14:41.767 00:14:44.300 Initializing NVMe Controllers 00:14:44.300 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:44.300 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:44.300 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:44.300 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:44.300 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:44.300 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:44.300 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:44.300 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:44.300 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:44.300 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:44.300 Initialization complete. Launching workers. 
00:14:44.300 ======================================================== 00:14:44.300 Latency(us) 00:14:44.300 Device Information : IOPS MiB/s Average min max 00:14:44.300 PCIE (0000:00:10.0) NSID 1 from core 2: 3427.93 13.39 4664.04 1135.95 15215.00 00:14:44.300 PCIE (0000:00:11.0) NSID 1 from core 2: 3427.93 13.39 4664.24 1096.80 16649.12 00:14:44.300 PCIE (0000:00:13.0) NSID 1 from core 2: 3427.93 13.39 4663.21 1128.38 17227.79 00:14:44.300 PCIE (0000:00:12.0) NSID 1 from core 2: 3427.93 13.39 4662.65 1100.79 18118.90 00:14:44.300 PCIE (0000:00:12.0) NSID 2 from core 2: 3427.93 13.39 4663.08 1130.14 13664.62 00:14:44.300 PCIE (0000:00:12.0) NSID 3 from core 2: 3427.93 13.39 4662.35 1036.26 13822.99 00:14:44.300 ======================================================== 00:14:44.300 Total : 20567.55 80.34 4663.26 1036.26 18118.90 00:14:44.300 00:14:44.300 ************************************ 00:14:44.300 END TEST nvme_multi_secondary 00:14:44.300 ************************************ 00:14:44.300 10:20:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65378 00:14:44.300 10:20:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65379 00:14:44.300 00:14:44.300 real 0m11.238s 00:14:44.300 user 0m18.659s 00:14:44.300 sys 0m1.083s 00:14:44.300 10:20:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.300 10:20:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:44.300 10:20:38 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:44.300 10:20:38 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:44.300 10:20:38 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64294 ]] 00:14:44.300 10:20:38 nvme -- common/autotest_common.sh@1094 -- # kill 64294 00:14:44.300 10:20:38 nvme -- common/autotest_common.sh@1095 -- # wait 64294 00:14:44.300 [2024-11-25 10:20:38.206671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.300 [2024-11-25 10:20:38.206836] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.300 [2024-11-25 10:20:38.206906] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.300 [2024-11-25 10:20:38.206947] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.300 [2024-11-25 10:20:38.211122] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.211242] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.211280] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.211327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.215023] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 
00:14:44.301 [2024-11-25 10:20:38.215080] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.215103] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.215125] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.217564] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.217819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.217852] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 [2024-11-25 10:20:38.217875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65251) is not found. Dropping the request. 00:14:44.301 10:20:38 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:14:44.301 10:20:38 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:14:44.301 10:20:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:44.301 10:20:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.301 10:20:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.301 10:20:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.301 ************************************ 00:14:44.301 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:44.301 ************************************ 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:44.301 * Looking for test storage... 
00:14:44.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.301 --rc genhtml_branch_coverage=1 00:14:44.301 --rc genhtml_function_coverage=1 00:14:44.301 --rc genhtml_legend=1 00:14:44.301 --rc geninfo_all_blocks=1 00:14:44.301 --rc geninfo_unexecuted_blocks=1 00:14:44.301 00:14:44.301 ' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.301 --rc genhtml_branch_coverage=1 00:14:44.301 --rc genhtml_function_coverage=1 00:14:44.301 --rc genhtml_legend=1 00:14:44.301 --rc geninfo_all_blocks=1 00:14:44.301 --rc geninfo_unexecuted_blocks=1 00:14:44.301 00:14:44.301 ' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.301 --rc genhtml_branch_coverage=1 00:14:44.301 --rc genhtml_function_coverage=1 00:14:44.301 --rc genhtml_legend=1 00:14:44.301 --rc geninfo_all_blocks=1 00:14:44.301 --rc geninfo_unexecuted_blocks=1 00:14:44.301 00:14:44.301 ' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.301 --rc genhtml_branch_coverage=1 00:14:44.301 --rc genhtml_function_coverage=1 00:14:44.301 --rc genhtml_legend=1 00:14:44.301 --rc geninfo_all_blocks=1 00:14:44.301 --rc geninfo_unexecuted_blocks=1 00:14:44.301 00:14:44.301 ' 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:44.301 
10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:14:44.301 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65551 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65551 00:14:44.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65551 ']' 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
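The get_first_nvme_bdf step traced above is just gen_nvme.sh piped through jq. The same discovery as a stand-alone sketch (assuming it is run from the SPDK repo root; the addresses are whatever QEMU exposed, 0000:00:10.0 through 0000:00:13.0 in this run):

    # enumerate the PCI addresses (BDFs) of all NVMe controllers,
    # then keep the first one as the target for the error-injection test
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # -> 0000:00:10.0 here
    echo "$bdf"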
00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.560 10:20:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:44.560 [2024-11-25 10:20:38.849268] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:14:44.560 [2024-11-25 10:20:38.849677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65551 ] 00:14:44.818 [2024-11-25 10:20:39.076378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.076 [2024-11-25 10:20:39.264983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.076 [2024-11-25 10:20:39.265132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.076 [2024-11-25 10:20:39.265271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.076 [2024-11-25 10:20:39.265528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:46.012 nvme0n1 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_blHXV.txt 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:46.012 true 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.012 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:46.271 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732530040 00:14:46.271 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65575 00:14:46.271 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:46.271 10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:46.271 
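Stripped of the xtrace noise, the setup just traced is a short RPC sequence against the running spdk_tgt. A hedged reconstruction of those steps, with the RPC names and arguments exactly as they appear in this log (cmd_b64 stands for the base64-encoded Get Features admin command shown in the trace above, elided here for brevity):

    rpc=scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # arm a one-shot injected error: the next admin opcode 0x0a (Get Features)
    # is held for up to 15 s and completed with SCT=0, SC=1 (Invalid Opcode)
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # send the admin command that will get stuck, then reset the controller;
    # the reset has to complete the stuck command manually (see the log below)
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0
    wait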
10:20:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:48.175 [2024-11-25 10:20:42.355027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:14:48.175 [2024-11-25 10:20:42.355541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.175 [2024-11-25 10:20:42.355579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.175 [2024-11-25 10:20:42.355601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.175 [2024-11-25 10:20:42.358229] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:14:48.175 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65575 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65575 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65575 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_blHXV.txt 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_blHXV.txt 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65551 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65551 ']' 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65551 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.175 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65551 00:14:48.435 killing process with pid 65551 00:14:48.435 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.435 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.435 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65551' 00:14:48.435 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65551 00:14:48.435 10:20:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65551 00:14:50.968 10:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:14:50.968 10:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:14:50.968 ************************************ 00:14:50.968 END TEST bdev_nvme_reset_stuck_adm_cmd 00:14:50.968 ************************************ 00:14:50.968 00:14:50.968 real 0m6.595s 
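The two base64_decode_bits calls above extract the Status Code and Status Code Type from the 16-byte completion that bdev_nvme_send_cmd wrote to the temp file. A minimal equivalent decode, relying only on the NVMe completion layout (the status word occupies bytes 14-15 of the CQE; bit 0 of that word is the phase tag, bits 8:1 the SC, bits 11:9 the SCT):

    cpl_b64="AAAAAAAAAAAAAAAAAAACAA=="             # the .cpl value captured above
    bytes=($(base64 -d <<< "$cpl_b64" | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[15] << 8 | bytes[14] ))       # 16-bit status word, 0x0002 here
    printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # -> SC=0x1 SCT=0x0, matching the injected --sct 0 --sc 1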
00:14:50.968 user 0m22.753s 00:14:50.968 sys 0m0.925s 00:14:50.968 10:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.968 10:20:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:50.968 10:20:45 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:14:50.968 10:20:45 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:14:50.968 10:20:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:50.968 10:20:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.968 10:20:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.968 ************************************ 00:14:50.968 START TEST nvme_fio 00:14:50.968 ************************************ 00:14:50.968 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:14:50.968 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:50.968 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:14:50.968 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:14:50.968 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:50.968 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:14:50.968 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:50.968 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:50.968 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:50.969 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:50.969 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:50.969 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:14:50.969 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:14:50.969 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:50.969 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:50.969 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:51.227 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:51.227 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:51.485 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:51.485 10:20:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.485 10:20:45 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:51.485 10:20:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:51.744 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:51.744 fio-3.35 00:14:51.744 Starting 1 thread 00:14:55.029 00:14:55.029 test: (groupid=0, jobs=1): err= 0: pid=65731: Mon Nov 25 10:20:49 2024 00:14:55.029 read: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(119MiB/2001msec) 00:14:55.029 slat (nsec): min=4623, max=63804, avg=6877.86, stdev=2551.20 00:14:55.029 clat (usec): min=300, max=9809, avg=4190.45, stdev=708.31 00:14:55.029 lat (usec): min=307, max=9847, avg=4197.32, stdev=709.25 00:14:55.029 clat percentiles (usec): 00:14:55.029 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3687], 00:14:55.029 | 30.00th=[ 3818], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4293], 00:14:55.029 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 4948], 00:14:55.029 | 99.00th=[ 7504], 99.50th=[ 8225], 99.90th=[ 9110], 99.95th=[ 9241], 00:14:55.029 | 99.99th=[ 9634] 00:14:55.029 bw ( KiB/s): min=59368, max=64072, per=100.00%, avg=62133.33, stdev=2458.54, samples=3 00:14:55.029 iops : min=14842, max=16018, avg=15533.33, stdev=614.64, samples=3 00:14:55.029 write: IOPS=15.2k, BW=59.4MiB/s (62.3MB/s)(119MiB/2001msec); 0 zone resets 00:14:55.029 slat (nsec): min=4703, max=64655, avg=7048.18, stdev=2529.49 00:14:55.029 clat (usec): min=343, max=9715, avg=4201.08, stdev=708.17 00:14:55.029 lat (usec): min=351, max=9729, avg=4208.13, stdev=709.12 00:14:55.029 clat percentiles (usec): 00:14:55.029 | 1.00th=[ 2835], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3687], 00:14:55.029 | 30.00th=[ 3818], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4293], 00:14:55.029 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 4948], 00:14:55.029 | 99.00th=[ 7504], 99.50th=[ 8225], 99.90th=[ 9110], 99.95th=[ 9241], 00:14:55.029 | 99.99th=[ 9503] 00:14:55.029 bw ( KiB/s): min=58704, max=63344, per=100.00%, avg=61680.00, stdev=2583.29, samples=3 00:14:55.029 iops : min=14676, max=15836, avg=15420.00, stdev=645.82, samples=3 00:14:55.029 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:14:55.029 lat (msec) : 2=0.04%, 4=37.84%, 10=62.09% 00:14:55.029 cpu : usr=98.65%, sys=0.25%, ctx=4, majf=0, minf=607 
00:14:55.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:55.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.030 issued rwts: total=30380,30444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.030 00:14:55.030 Run status group 0 (all jobs): 00:14:55.030 READ: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=119MiB (124MB), run=2001-2001msec 00:14:55.030 WRITE: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=119MiB (125MB), run=2001-2001msec 00:14:55.030 ----------------------------------------------------- 00:14:55.030 Suppressions used: 00:14:55.030 count bytes template 00:14:55.030 1 32 /usr/src/fio/parse.c 00:14:55.030 1 8 libtcmalloc_minimal.so 00:14:55.030 ----------------------------------------------------- 00:14:55.030 00:14:55.030 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:55.030 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:55.030 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:55.030 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:55.596 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:55.596 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:55.854 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:55.854 10:20:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:14:55.854 10:20:49 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:55.854 10:20:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:55.854 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:55.854 fio-3.35 00:14:55.854 Starting 1 thread 00:14:59.138 00:14:59.138 test: (groupid=0, jobs=1): err= 0: pid=65792: Mon Nov 25 10:20:53 2024 00:14:59.138 read: IOPS=16.1k, BW=62.7MiB/s (65.7MB/s)(125MiB/2001msec) 00:14:59.138 slat (usec): min=4, max=102, avg= 6.52, stdev= 2.38 00:14:59.138 clat (usec): min=325, max=10945, avg=3964.74, stdev=550.74 00:14:59.138 lat (usec): min=331, max=11019, avg=3971.26, stdev=551.51 00:14:59.138 clat percentiles (usec): 00:14:59.138 | 1.00th=[ 3130], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3589], 00:14:59.138 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884], 00:14:59.138 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4752], 00:14:59.138 | 99.00th=[ 5735], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 9372], 00:14:59.138 | 99.99th=[10683] 00:14:59.138 bw ( KiB/s): min=60552, max=68776, per=99.52%, avg=63898.67, stdev=4320.39, samples=3 00:14:59.138 iops : min=15138, max=17196, avg=15974.67, stdev=1081.58, samples=3 00:14:59.138 write: IOPS=16.1k, BW=62.8MiB/s (65.8MB/s)(126MiB/2001msec); 0 zone resets 00:14:59.138 slat (nsec): min=4677, max=86370, avg=6632.85, stdev=2309.21 00:14:59.138 clat (usec): min=297, max=10770, avg=3971.92, stdev=547.67 00:14:59.138 lat (usec): min=304, max=10789, avg=3978.56, stdev=548.41 00:14:59.138 clat percentiles (usec): 00:14:59.138 | 1.00th=[ 3163], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:14:59.138 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884], 00:14:59.138 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4752], 00:14:59.138 | 99.00th=[ 5604], 99.50th=[ 6652], 99.90th=[ 8029], 99.95th=[ 9503], 00:14:59.138 | 99.99th=[10552] 00:14:59.138 bw ( KiB/s): min=60928, max=68104, per=98.90%, avg=63589.33, stdev=3930.63, samples=3 00:14:59.138 iops : min=15232, max=17026, avg=15897.33, stdev=982.66, samples=3 00:14:59.138 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:14:59.138 lat (msec) : 2=0.04%, 4=64.99%, 10=34.90%, 20=0.03% 00:14:59.138 cpu : usr=98.90%, sys=0.10%, ctx=13, majf=0, minf=607 00:14:59.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:59.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:59.138 issued rwts: total=32119,32165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:59.138 00:14:59.138 Run status group 0 (all jobs): 00:14:59.138 READ: bw=62.7MiB/s (65.7MB/s), 62.7MiB/s-62.7MiB/s (65.7MB/s-65.7MB/s), io=125MiB (132MB), run=2001-2001msec 00:14:59.138 WRITE: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=126MiB (132MB), run=2001-2001msec 00:14:59.397 ----------------------------------------------------- 00:14:59.397 Suppressions used: 00:14:59.397 count bytes template 00:14:59.397 1 32 /usr/src/fio/parse.c 00:14:59.397 1 8 libtcmalloc_minimal.so 00:14:59.397 ----------------------------------------------------- 00:14:59.397 00:14:59.397 
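Each of these nvme_fio passes is the same fio invocation with the SPDK NVMe ioengine plugin preloaded, repeated once per controller. One run condensed, with paths exactly as they appear in this log (the libasan preload, and its .so.8 version, are specific to this ASAN build; note that the colons in the PCI address are rewritten as dots so fio's filename parser does not split on them):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
            '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096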
10:20:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:59.397 10:20:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:59.397 10:20:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:59.397 10:20:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:59.656 10:20:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:59.656 10:20:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:59.915 10:20:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:59.915 10:20:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:59.915 10:20:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:00.174 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:00.174 fio-3.35 00:15:00.174 Starting 1 thread 00:15:03.478 00:15:03.478 test: (groupid=0, jobs=1): err= 0: pid=65858: Mon Nov 25 10:20:57 2024 00:15:03.478 read: IOPS=15.0k, BW=58.7MiB/s (61.5MB/s)(117MiB/2001msec) 00:15:03.478 slat (nsec): min=4703, max=71959, avg=7148.84, stdev=2823.23 00:15:03.478 clat (usec): min=353, max=11862, avg=4234.68, stdev=688.36 00:15:03.478 lat (usec): min=360, max=11934, avg=4241.83, stdev=689.35 00:15:03.478 clat percentiles (usec): 00:15:03.478 | 1.00th=[ 3228], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3720], 00:15:03.478 | 30.00th=[ 3818], 
40.00th=[ 3949], 50.00th=[ 4228], 60.00th=[ 4359], 00:15:03.478 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5080], 00:15:03.478 | 99.00th=[ 7046], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 9634], 00:15:03.478 | 99.99th=[11600] 00:15:03.478 bw ( KiB/s): min=55536, max=65512, per=98.64%, avg=59277.33, stdev=5435.32, samples=3 00:15:03.478 iops : min=13884, max=16378, avg=14819.33, stdev=1358.83, samples=3 00:15:03.478 write: IOPS=15.0k, BW=58.7MiB/s (61.6MB/s)(117MiB/2001msec); 0 zone resets 00:15:03.478 slat (nsec): min=4743, max=67097, avg=7330.28, stdev=2994.40 00:15:03.478 clat (usec): min=291, max=11639, avg=4251.68, stdev=694.36 00:15:03.478 lat (usec): min=297, max=11656, avg=4259.01, stdev=695.36 00:15:03.478 clat percentiles (usec): 00:15:03.478 | 1.00th=[ 3294], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3752], 00:15:03.478 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4228], 60.00th=[ 4359], 00:15:03.478 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5145], 00:15:03.478 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 9896], 00:15:03.478 | 99.99th=[11338] 00:15:03.478 bw ( KiB/s): min=55264, max=64872, per=98.31%, avg=59098.67, stdev=5088.93, samples=3 00:15:03.478 iops : min=13816, max=16218, avg=14774.67, stdev=1272.23, samples=3 00:15:03.478 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:03.478 lat (msec) : 2=0.05%, 4=42.36%, 10=57.51%, 20=0.05% 00:15:03.478 cpu : usr=98.75%, sys=0.15%, ctx=8, majf=0, minf=607 00:15:03.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:03.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.478 issued rwts: total=30061,30071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.478 00:15:03.478 Run status group 0 (all jobs): 00:15:03.478 READ: bw=58.7MiB/s (61.5MB/s), 58.7MiB/s-58.7MiB/s (61.5MB/s-61.5MB/s), io=117MiB (123MB), run=2001-2001msec 00:15:03.478 WRITE: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=117MiB (123MB), run=2001-2001msec 00:15:03.478 ----------------------------------------------------- 00:15:03.478 Suppressions used: 00:15:03.478 count bytes template 00:15:03.478 1 32 /usr/src/fio/parse.c 00:15:03.478 1 8 libtcmalloc_minimal.so 00:15:03.478 ----------------------------------------------------- 00:15:03.478 00:15:03.478 10:20:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:03.478 10:20:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:03.478 10:20:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:03.478 10:20:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:03.736 10:20:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:03.737 10:20:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:03.995 10:20:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:03.995 10:20:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:03.995 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:04.254 10:20:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:04.254 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:04.254 fio-3.35 00:15:04.254 Starting 1 thread 00:15:07.539 00:15:07.539 test: (groupid=0, jobs=1): err= 0: pid=65919: Mon Nov 25 10:21:01 2024 00:15:07.539 read: IOPS=16.1k, BW=63.1MiB/s (66.1MB/s)(126MiB/2005msec) 00:15:07.539 slat (nsec): min=4720, max=70579, avg=6362.45, stdev=2221.65 00:15:07.539 clat (usec): min=1015, max=8403, avg=3300.53, stdev=900.69 00:15:07.539 lat (usec): min=1020, max=8410, avg=3306.90, stdev=900.94 00:15:07.539 clat percentiles (usec): 00:15:07.539 | 1.00th=[ 1663], 5.00th=[ 1811], 10.00th=[ 1926], 20.00th=[ 2212], 00:15:07.539 | 30.00th=[ 2933], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3687], 00:15:07.539 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4015], 95.00th=[ 4555], 00:15:07.539 | 99.00th=[ 5145], 99.50th=[ 6587], 99.90th=[ 7701], 99.95th=[ 7832], 00:15:07.539 | 99.99th=[ 8094] 00:15:07.539 bw ( KiB/s): min=62752, max=68064, per=100.00%, avg=64688.00, stdev=2350.14, samples=4 00:15:07.539 iops : min=15688, max=17016, avg=16172.00, stdev=587.53, samples=4 00:15:07.539 write: IOPS=16.2k, BW=63.2MiB/s (66.2MB/s)(127MiB/2005msec); 0 zone resets 00:15:07.539 slat (usec): min=5, max=128, avg= 6.68, stdev= 2.34 00:15:07.540 clat (usec): min=1244, max=21741, avg=4590.21, stdev=2881.89 00:15:07.540 lat (usec): min=1251, max=21747, avg=4596.88, stdev=2881.89 00:15:07.540 clat percentiles (usec): 00:15:07.540 | 1.00th=[ 1745], 5.00th=[ 1958], 10.00th=[ 2212], 20.00th=[ 3490], 00:15:07.540 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3818], 00:15:07.540 | 70.00th=[ 3949], 80.00th=[ 4686], 90.00th=[ 8455], 95.00th=[10421], 00:15:07.540 
| 99.00th=[17433], 99.50th=[19268], 99.90th=[20579], 99.95th=[20841], 00:15:07.540 | 99.99th=[21103] 00:15:07.540 bw ( KiB/s): min=62264, max=67688, per=99.96%, avg=64650.00, stdev=2284.08, samples=4 00:15:07.540 iops : min=15566, max=16922, avg=16162.50, stdev=571.02, samples=4 00:15:07.540 lat (msec) : 2=9.65%, 4=71.15%, 10=16.01%, 20=3.05%, 50=0.14% 00:15:07.540 cpu : usr=99.05%, sys=0.20%, ctx=4, majf=0, minf=605 00:15:07.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:07.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.540 issued rwts: total=32377,32419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.540 00:15:07.540 Run status group 0 (all jobs): 00:15:07.540 READ: bw=63.1MiB/s (66.1MB/s), 63.1MiB/s-63.1MiB/s (66.1MB/s-66.1MB/s), io=126MiB (133MB), run=2005-2005msec 00:15:07.540 WRITE: bw=63.2MiB/s (66.2MB/s), 63.2MiB/s-63.2MiB/s (66.2MB/s-66.2MB/s), io=127MiB (133MB), run=2005-2005msec 00:15:07.799 ----------------------------------------------------- 00:15:07.799 Suppressions used: 00:15:07.799 count bytes template 00:15:07.799 1 32 /usr/src/fio/parse.c 00:15:07.799 1 8 libtcmalloc_minimal.so 00:15:07.799 ----------------------------------------------------- 00:15:07.799 00:15:07.799 10:21:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:07.799 10:21:02 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:07.799 00:15:07.799 real 0m17.008s 00:15:07.799 user 0m13.527s 00:15:07.799 sys 0m1.918s 00:15:07.799 10:21:02 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.799 10:21:02 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:07.799 ************************************ 00:15:07.799 END TEST nvme_fio 00:15:07.799 ************************************ 00:15:08.057 00:15:08.057 real 1m34.386s 00:15:08.057 user 3m51.132s 00:15:08.057 sys 0m16.679s 00:15:08.057 10:21:02 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.057 10:21:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.057 ************************************ 00:15:08.057 END TEST nvme 00:15:08.057 ************************************ 00:15:08.057 10:21:02 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:08.057 10:21:02 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:08.057 10:21:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.057 10:21:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.057 10:21:02 -- common/autotest_common.sh@10 -- # set +x 00:15:08.057 ************************************ 00:15:08.058 START TEST nvme_scc 00:15:08.058 ************************************ 00:15:08.058 10:21:02 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:08.058 * Looking for test storage... 
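The nvme_fio stage above works out which ASan runtime the SPDK fio plugin links against before launching fio, so the sanitizer is loaded ahead of the ioengine. A minimal bash sketch of that detection pattern, using the paths from the trace (the wrapper name is hypothetical):

  run_fio_with_asan() {
      # Same idea as the trace: probe the plugin's dependencies for a sanitizer runtime.
      local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
      local sanitizers=('libasan' 'libclang_rt.asan') sanitizer asan_lib=
      for sanitizer in "${sanitizers[@]}"; do
          # Third ldd column is the resolved path, e.g. /usr/lib64/libasan.so.8.
          asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
          [[ -n $asan_lib ]] && break
      done
      # Preload the sanitizer first, then the plugin itself, as LD_PRELOAD above shows.
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
  }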
00:15:08.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:08.058 10:21:02 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:08.058 10:21:02 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:08.058 10:21:02 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:08.317 10:21:02 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:08.317 10:21:02 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.317 10:21:02 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.317 --rc genhtml_branch_coverage=1 00:15:08.317 --rc genhtml_function_coverage=1 00:15:08.317 --rc genhtml_legend=1 00:15:08.317 --rc geninfo_all_blocks=1 00:15:08.317 --rc geninfo_unexecuted_blocks=1 00:15:08.317 00:15:08.317 ' 00:15:08.317 10:21:02 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.317 --rc genhtml_branch_coverage=1 00:15:08.317 --rc genhtml_function_coverage=1 00:15:08.317 --rc genhtml_legend=1 00:15:08.317 --rc geninfo_all_blocks=1 00:15:08.317 --rc geninfo_unexecuted_blocks=1 00:15:08.317 00:15:08.317 ' 00:15:08.317 10:21:02 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.317 --rc genhtml_branch_coverage=1 00:15:08.317 --rc genhtml_function_coverage=1 00:15:08.317 --rc genhtml_legend=1 00:15:08.317 --rc geninfo_all_blocks=1 00:15:08.317 --rc geninfo_unexecuted_blocks=1 00:15:08.317 00:15:08.317 ' 00:15:08.317 10:21:02 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:08.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.317 --rc genhtml_branch_coverage=1 00:15:08.317 --rc genhtml_function_coverage=1 00:15:08.317 --rc genhtml_legend=1 00:15:08.317 --rc geninfo_all_blocks=1 00:15:08.317 --rc geninfo_unexecuted_blocks=1 00:15:08.317 00:15:08.317 ' 00:15:08.317 10:21:02 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:08.317 10:21:02 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:08.317 10:21:02 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:08.317 10:21:02 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:08.317 10:21:02 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.317 10:21:02 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.318 10:21:02 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.318 10:21:02 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.318 10:21:02 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.318 10:21:02 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.318 10:21:02 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.318 10:21:02 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.318 10:21:02 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:08.318 10:21:02 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
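The cmp_versions trace above (scripts/common.sh) splits each version string on '.', '-' and ':' and compares the fields numerically; here it concludes 1.15 < 2 and exports the pre-2.x lcov flags. A condensed sketch of that comparison, assuming purely numeric fields:

  lt() {
      # Return 0 when $1 sorts strictly before $2, comparing field by field.
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # A missing field counts as 0, so "2" is compared as 2.0.
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1 # equal is not "less than"
  }
  lt 1.15 2 && echo 'lcov is pre-2.x' # branch taken, as in the trace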
00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:08.318 10:21:02 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:08.318 10:21:02 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.318 10:21:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:08.318 10:21:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:08.318 10:21:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:08.318 10:21:02 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:08.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:08.838 Waiting for block devices as requested 00:15:08.838 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:08.838 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:09.104 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:09.104 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:14.378 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:14.378 10:21:08 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:14.378 10:21:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:14.378 10:21:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:14.378 10:21:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:14.378 10:21:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
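Everything from here to the end of the controller scan is one mechanism: nvme_get pipes nvme-cli's id-ctrl output (and, for namespaces, id-ns) through read with IFS=: and evals each reg/val pair into a global associative array, which is why every register produces the same few xtrace lines. A minimal sketch of that loop, assuming output lines shaped like 'vid : 0x1b36':

  nvme_get() {
      local ref=$1 reg val
      shift
      # Declare the target array globally, e.g. nvme0=() or ng0n1=().
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
          # Banner lines with no value are skipped, as the [[ -n ... ]] tests do.
          [[ -n $val ]] || continue
          reg=${reg// /}   # strip the padding around the register name
          eval "${ref}[${reg}]=\"${val# }\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }
  nvme_get nvme0 id-ctrl /dev/nvme0
  echo "${nvme0[subnqn]}" # -> nqn.2019-08.org.qemu:12341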
00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.378 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.378 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
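Fields like oacs above are bitmasks defined by the NVMe base spec; the 0x12a just captured decodes to Format NVM, Namespace Management, Directives and Doorbell Buffer Config support. A small worked example of checking such bits against the parsed value (the bit table is from the spec, not from this log):

  oacs=0x12a # Optional Admin Command Support, as captured above
  declare -A oacs_bit=(
      [0]='Security Send/Receive' [1]='Format NVM' [2]='Firmware Download'
      [3]='Namespace Management'  [4]='Device Self-test' [5]='Directives'
      [6]='NVMe-MI' [7]='Virtualization Management' [8]='Doorbell Buffer Config'
  )
  for bit in 0 1 2 3 4 5 6 7 8; do
      (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${oacs_bit[$bit]}"
  done
  # 0x12a sets bits 1, 3, 5 and 8 on this QEMU controller.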
00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:14.379 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:14.380 10:21:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:14.380 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:14.381 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:14.381 
10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.381 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
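Among the namespace fields being collected here, flbas selects the in-use LBA format: its low nibble is the format index, so the 0x4 captured above points at lbaf4, and each lbafN entry's lbads field is the log2 of the data block size. A short sketch of deriving the block size from those parsed strings (the lbaf4 value is hypothetical, chosen to give the usual 4096-byte blocks):

  flbas=0x4                  # from the trace: format index in the low nibble
  lbaf4='ms:0 lbads:12 rp:0' # hypothetical entry for the selected format
  index=$(( flbas & 0xf ))
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf4")
  echo "LBA format $index, block size $(( 1 << lbads )) bytes" # -> 4096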
00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:14.382 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:14.382 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:14.383 10:21:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:14.383 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:14.383 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:14.384 10:21:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:14.384 10:21:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:14.384 10:21:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:14.384 10:21:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:14.384 10:21:08 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:14.384 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 
10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:14.385 
10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.385 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
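[Editor's note] For orientation: the reason the same @21-@23 lines repeat for nvme0, ng0n1, nvme0n1 and now nvme1 is the enumeration loop visible at functions.sh@47-@57 in this trace. The sketch below reconstructs its shape from those lines; it is not SPDK's exact code. Reading the PCI address from the sysfs `address` attribute is an assumption (the trace only shows the resulting `pci=0000:00:10.0` at @49), pci_can_use is stubbed in place of the scripts/common.sh helper traced at @18-@27, and the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping at @58-@63 is omitted.

    # Sketch of the enumeration driving the repeated nvme_get calls above.
    shopt -s extglob nullglob
    pci_can_use() { true; }   # stand-in for the scripts/common.sh helper (@18-@27)

    for ctrl in /sys/class/nvme/nvme*; do                      # functions.sh@47
        pci=$(< "$ctrl/address")                               # assumed source of "pci=..." (@49)
        pci_can_use "$pci" || continue                         # @50: skip reserved controllers
        ctrl_dev=${ctrl##*/}                                   # @51: e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"          # @52: fills the nvme1[...] array

        # @54: extglob matches both the generic node (ng1n1) and the block node (nvme1n1)
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}                                   # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"            # @57: fills ng1n1[...] etc.
        done
    done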
00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:14.386 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.387 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:14.387 10:21:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:14.387 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:14.388 10:21:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.388 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 
10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:14.389 
10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.389 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:14.390 10:21:08 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:14.390 10:21:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:14.390 10:21:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:14.390 10:21:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:14.390 10:21:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.390 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:14.391 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:14.392 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:14.392 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.658 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:14.659 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:14.659 
10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.659 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.660 
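
The trace up to this point shows the nvme_get helper from nvme/functions.sh populating the nvme2 associative array: it feeds `nvme id-ctrl /dev/nvme2` output through a `while IFS=: read -r reg val` loop, skips blank fields with the `[[ -n ... ]]` guards, and evals each pair into the array (nvme2[sqes]=0x66, nvme2[cqes]=0x44, nvme2[oncs]=0x15d, nvme2[subnqn]=nqn.2019-08.org.qemu:12342, ...). A minimal standalone sketch of that parsing pattern follows; the array name, the `IFS=': '` trimming shortcut, and the sample input are illustrative stand-ins, not the SPDK helper itself, which evals into a dynamically named global array passed in as $ref.

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing loop traced above (simplified; the real
# helper uses IFS=: plus its own whitespace handling and a shifted $ref).
declare -A ctrl

parse_id_ctrl() {
  local reg val
  while IFS=': ' read -r reg val; do
    # Skip separator and empty-value lines, like the [[ -n ... ]] guards.
    [[ -n $reg && -n $val ]] || continue
    ctrl[$reg]=$val
  done
}

parse_id_ctrl <<'EOF'
sqes      : 0x66
cqes      : 0x44
oncs      : 0x15d
subnqn    : nqn.2019-08.org.qemu:12342
EOF

echo "sqes=${ctrl[sqes]} oncs=${ctrl[oncs]} subnqn=${ctrl[subnqn]}"

The paired `eval 'nvme2[key]="val"'` / `nvme2[key]=val` lines throughout the trace are exactly this eval of a quoted assignment, echoed twice by xtrace (once for the eval command, once for the assignment it expands to).
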
10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.660 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:15:14.661 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:15:14.662 10:21:08 nvme_scc -- 
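
Between the id-ns dumps, the trace shows how each namespace node is discovered: `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` is an extglob that matches both the character-device entries (ng2n1) and the block-device entries (nvme2n1) under the controller's sysfs directory, and `_ctrl_ns[${ns##*n}]=...` files each one under its namespace index. A standalone sketch of that enumeration follows; the sysfs path and array name are assumptions, and the real helper binds _ctrl_ns to the per-controller array (here nvme2_ns) via `local -n`, as seen at functions.sh@53.

#!/usr/bin/env bash
# Sketch of the namespace enumeration traced above (names illustrative).
shopt -s extglob nullglob

ctrl=/sys/class/nvme/nvme2           # assumed controller sysfs node
declare -A ctrl_ns

inst=${ctrl##*nvme}                  # "2"     -> matches ng2n* char nodes
base=${ctrl##*/}                     # "nvme2" -> matches nvme2n* block nodes

for ns in "$ctrl/"@("ng${inst}"|"${base}n")*; do
  [[ -e $ns ]] || continue           # same guard as functions.sh@55
  # ${ns##*n} keeps the digits after the last "n": the namespace index.
  # A later match for the same index overwrites the earlier entry.
  ctrl_ns[${ns##*n}]=${ns##*/}
done

declare -p ctrl_ns                   # e.g. ctrl_ns=([1]="ng2n1" [2]="ng2n2")
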
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.662 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 
10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.663 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.664 10:21:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.664 10:21:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.664 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- 
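
Every field assignment in this trace comes from the same small loop: nvme_get declares a global associative array named after the device node, runs the nvme-cli command given in its remaining arguments, and evals each "reg : val" line of the output into the array. A minimal sketch reconstructed from the fragments visible above, with simplified whitespace handling (the real functions.sh also special-cases rows such as the lbaf table):

  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declares global nvme2n1=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}             # 'nsze   ' -> 'nsze'
      val=${val#"${val%%[![:space:]]*}"}   # strip leading spaces
      [[ -n $val ]] || continue
      eval "${ref}[\$reg]=\"\$val\""       # -> nvme2n1[nsze]=0x100000
    done < <("$@")                         # e.g. nvme id-ns /dev/nvme2n1
  }

After the call the fields read back as plain strings, e.g. "${nvme2n1[nsze]}" is 0x100000.
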
nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:14.665 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.666 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:14.666 10:21:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.666 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
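
The lbaf rows captured for nvme2n1 are stored as raw strings, so anything that needs the active block size has to decode them: flbas bits 3:0 select the in-use LBA format, and lbads is log2 of the data block size. A hypothetical helper, not part of the traced script, built on those two facts:

  get_block_size() {
    local -n ns=$1                        # nameref, call as: get_block_size nvme2n1
    local fmt=$(( ${ns[flbas]} & 0xf ))   # flbas 0x4 -> LBA format 4
    local lbads=${ns[lbaf$fmt]#*lbads:}   # 'ms:0 lbads:12 rp:0 (in use)'
    echo $(( 1 << ${lbads%% *} ))         # 2^lbads bytes per data block
  }

For the namespaces in this run that is 1 << 12 = 4096 bytes, matching the "(in use)" marker on lbaf4.
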
]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:14.667 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:14.667 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.668 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:14.669 
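
The loop header repeated through this fragment shows how namespaces are discovered: with extglob enabled, one pattern matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*) under the controller's sysfs directory, and both land in _ctrl_ns keyed by namespace id, so the block-device name overwrites the generic one for the same id (ng2n3 above, nvme2n3 below). A sketch of that walk, assuming _ctrl_ns is declared associative:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2* | nvme2n*
    [[ -e $ns ]] || continue            # pattern stays literal if nothing matched
    ns_dev=${ns##*/}                    # ng2n3, nvme2n1, ...
    _ctrl_ns[${ns_dev##*n}]=$ns_dev     # key 3 -> ng2n3, later -> nvme2n3
  done
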
10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:14.669 10:21:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.669 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:14.670 10:21:08 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:14.670 10:21:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:14.670 10:21:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:14.670 10:21:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:14.670 10:21:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 
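[Editor's note] The nvme_get trace beginning here shows the core parsing pattern: id-ctrl output from nvme-cli is split on ":" via IFS and each field is eval'ed into a global associative array named after the controller. A condensed sketch of that loop (the real functions.sh handles whitespace and namespace arrays more carefully):

    # Condensed from the nvme_get trace: parse "field : value" lines
    # into a global associative array keyed by field name.
    declare -gA nvme3=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}           # field name, e.g. vid
        [[ -n $reg && -n $val ]] || continue
        eval "nvme3[$reg]=\"${val# }\""    # e.g. nvme3[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)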
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:14.670 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:14.671 10:21:08 nvme_scc -- 
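[Editor's note] Two of the values captured here are encoded: ver=0x10400 packs major/minor/tertiary version bytes (NVMe 1.4.0), and mdts=7 is a power-of-two multiplier on the controller's minimum page size. Assuming the usual 4 KiB minimum page size (CAP.MPSMIN=0):

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    mdts=7
    echo $(( (1 << mdts) * 4096 ))   # 524288 bytes = 512 KiB max per command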
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:14.671 10:21:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 
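[Editor's note] oacs=0x12a is a bitmask of optional admin commands; per the NVMe base specification the set bits here are 1 (Format NVM), 3 (Namespace Management), 5 (Directives) and 8 (Doorbell Buffer Config), consistent with QEMU's emulated controller. They can be tested the same way the harness tests ONCS bits:

    oacs=0x12a
    (( oacs & 1 << 1 )) && echo 'Format NVM'
    (( oacs & 1 << 3 )) && echo 'Namespace Management'
    (( oacs & 1 << 5 )) && echo 'Directives'
    (( oacs & 1 << 8 )) && echo 'Doorbell Buffer Config'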
10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:14.671 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:14.672 10:21:08 nvme_scc -- 
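[Editor's note] wctemp and cctemp are reported in integer Kelvin, so the warning and critical composite-temperature thresholds captured here are 70 °C and 100 °C:

    wctemp=343 cctemp=373
    echo "warning threshold:  $(( wctemp - 273 )) C"   # 70 C
    echo "critical threshold: $(( cctemp - 273 )) C"   # 100 C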
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 
10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:14.672 
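[Editor's note] sqes=0x66 and cqes=0x44 each pack two log2 sizes into one byte: the low nibble is the required (minimum) queue entry size and the high nibble the maximum. Decoded:

    sqes=0x66 cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))   # 64-byte SQ entries (minimum)
    echo $(( 1 << (sqes >> 4) ))    # 64-byte SQ entries (maximum)
    echo $(( 1 << (cqes & 0xf) ))   # 16-byte CQ entries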
10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.672 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:14.932 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:14.933 10:21:08 nvme_scc -- 
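[Editor's note] Once a controller's identify data is parsed, the scan registers it in three associative arrays plus a numerically indexed list; later helpers use these to map a controller name back to its namespace map and PCI address. Condensed from the trace just above and below:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme3
    ctrls["$ctrl_dev"]=nvme3
    nvmes["$ctrl_dev"]=nvme3_ns              # name of its namespace array
    bdfs["$ctrl_dev"]=0000:00:13.0           # PCI address (bdf)
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme3   # slot 3 in the ordered list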
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:14.933 10:21:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:15:14.933 10:21:08 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
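[Editor's note] The get_ctrls_with_feature loop above keeps every controller whose ONCS has bit 8 set. oncs=0x15d decodes (per the NVMe base spec) to Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and, crucially here, bit 8 = Copy, the simple-copy command that SCC refers to; all four controllers pass, and the first ordered hit (nvme1 at 0000:00:10.0) is used below. A condensed sketch of the gate, assuming the controller arrays built by the scan:

    # Simplified ctrl_has_scc: succeed when ONCS bit 8 (Copy) is set.
    ctrl_has_scc() {
        local -n _ctrl=$1                 # nameref to e.g. nvme1's array
        (( ${_ctrl[oncs]:-0} & 1 << 8 ))
    }
    ctrl_has_scc nvme1 && echo 'nvme1 supports simple copy'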
00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:15:14.933 10:21:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:15:14.933 10:21:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:15:14.933 10:21:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:15.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.759 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.759 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:15.759 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:16.019 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:16.019 10:21:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:16.019 10:21:10 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:16.019 10:21:10 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.019 10:21:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:16.019 ************************************ 00:15:16.019 START TEST nvme_simple_copy 00:15:16.019 ************************************ 00:15:16.019 10:21:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:16.278 Initializing NVMe Controllers 00:15:16.278 Attaching to 0000:00:10.0 00:15:16.278 Controller supports SCC. Attached to 0000:00:10.0 00:15:16.278 Namespace ID: 1 size: 6GB 00:15:16.278 Initialization complete. 
00:15:16.278 00:15:16.278 Controller QEMU NVMe Ctrl (12340 ) 00:15:16.278 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:15:16.278 Namespace Block Size:4096 00:15:16.278 Writing LBAs 0 to 63 with Random Data 00:15:16.278 Copied LBAs from 0 - 63 to the Destination LBA 256 00:15:16.278 LBAs matching Written Data: 64 00:15:16.278 00:15:16.278 real 0m0.339s 00:15:16.278 user 0m0.134s 00:15:16.278 sys 0m0.101s 00:15:16.278 10:21:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.278 10:21:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:15:16.278 ************************************ 00:15:16.278 END TEST nvme_simple_copy 00:15:16.278 ************************************ 00:15:16.278 00:15:16.278 real 0m8.389s 00:15:16.278 user 0m1.552s 00:15:16.278 sys 0m1.823s 00:15:16.278 10:21:10 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.278 10:21:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:16.278 ************************************ 00:15:16.278 END TEST nvme_scc 00:15:16.278 ************************************ 00:15:16.537 10:21:10 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:15:16.537 10:21:10 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:15:16.537 10:21:10 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:15:16.537 10:21:10 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:15:16.537 10:21:10 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:15:16.537 10:21:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.537 10:21:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.537 10:21:10 -- common/autotest_common.sh@10 -- # set +x 00:15:16.537 ************************************ 00:15:16.537 START TEST nvme_fdp 00:15:16.537 ************************************ 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:15:16.537 * Looking for test storage... 00:15:16.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.537 10:21:10 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.537 10:21:10 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:16.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.537 --rc genhtml_branch_coverage=1 00:15:16.537 --rc genhtml_function_coverage=1 00:15:16.537 --rc genhtml_legend=1 00:15:16.537 --rc geninfo_all_blocks=1 00:15:16.537 --rc geninfo_unexecuted_blocks=1 00:15:16.537 00:15:16.537 ' 00:15:16.538 10:21:10 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:16.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.538 --rc genhtml_branch_coverage=1 00:15:16.538 --rc genhtml_function_coverage=1 00:15:16.538 --rc genhtml_legend=1 00:15:16.538 --rc geninfo_all_blocks=1 00:15:16.538 --rc geninfo_unexecuted_blocks=1 00:15:16.538 00:15:16.538 ' 00:15:16.538 10:21:10 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:16.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.538 --rc genhtml_branch_coverage=1 00:15:16.538 --rc genhtml_function_coverage=1 00:15:16.538 --rc genhtml_legend=1 00:15:16.538 --rc geninfo_all_blocks=1 00:15:16.538 --rc geninfo_unexecuted_blocks=1 00:15:16.538 00:15:16.538 ' 00:15:16.538 10:21:10 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:16.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.538 --rc genhtml_branch_coverage=1 00:15:16.538 --rc genhtml_function_coverage=1 00:15:16.538 --rc genhtml_legend=1 00:15:16.538 --rc geninfo_all_blocks=1 00:15:16.538 --rc geninfo_unexecuted_blocks=1 00:15:16.538 00:15:16.538 ' 00:15:16.538 10:21:10 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.538 10:21:10 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.538 10:21:10 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.538 10:21:10 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
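[Editor's note] The lt 1.15 2 trace above is scripts/common.sh comparing lcov versions to pick the right spelling of the coverage flags: both versions are split on ".", "-" and ":" and numeric components are compared left to right. A simplified sketch (numeric components only; the real script also regex-guards each part, as the decimal trace shows):

    lt() {
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'version 1.15 sorts before 2'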
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.538 10:21:10 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.538 10:21:10 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.538 10:21:10 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.538 10:21:10 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.538 10:21:10 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:15:16.538 10:21:10 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:16.538 10:21:10 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:15:16.538 10:21:10 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.538 10:21:10 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:17.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.105 Waiting for block devices as requested 00:15:17.105 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.365 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.365 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:17.365 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.651 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:22.652 10:21:16 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:15:22.652 10:21:16 nvme_fdp 
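[Editor's note] Notice how PATH keeps growing above: paths/export.sh prepends the go/protoc/golangci bin directories unconditionally, so each sourcing repeats the same segments. That is harmless for lookup (the first hit wins), but a hypothetical guard like this (not in paths/export.sh) would keep PATH tidy:

    # Hypothetical idempotent prepend; paths/export.sh does not do this.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, leave PATH alone
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH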
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:22.652 10:21:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:22.652 10:21:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:22.652 10:21:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.652 10:21:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- 
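[Editor's note] The scripts/common.sh lines in this span are pci_can_use deciding whether the harness may claim 0000:00:11.0; the bare "[[ =~ 0000:00:11.0 ]]" in the trace is the allowed-list regex test with PCI_ALLOWED empty. A rough simplification of the gate (the real ordering and matching in scripts/common.sh differ slightly):

    # Usable when PCI_BLOCKED does not list the address and PCI_ALLOWED
    # is empty or does list it.
    pci_can_use() {
        local addr=$1
        [[ $PCI_BLOCKED == *"$addr"* ]] && return 1
        [[ -z $PCI_ALLOWED || $PCI_ALLOWED == *"$addr"* ]]
    }
    pci_can_use 0000:00:11.0 && echo '0000:00:11.0 may be bound'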
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:22.652 10:21:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:22.652 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.653 10:21:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.653 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:22.654 10:21:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 
10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:22.654 10:21:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.654 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:22.655 10:21:16 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:22.655 10:21:16 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:22.655 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:22.656 10:21:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
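
The trace above is nvme/functions.sh's nvme_get walking the plain-text output of /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1: each "field : value" line is split on the colon via IFS=: and read -r reg val (@21), lines with an empty value (the banner line) are skipped (@22), and every surviving pair is eval'd into a global associative array named after the device (@23), here ng0n1. A minimal self-contained sketch of that pattern follows; the helper name nvme_get_sketch and the whitespace-trimming details are illustrative assumptions of this sketch, not the verbatim functions.sh source.

    nvme_get_sketch() {          # hypothetical name; the traced helper is nvme_get
        local ref=$1 reg val
        shift
        local -gA "$ref=()"      # e.g. ng0n1=(), as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # nvme-cli pads field names with spaces
            [[ -n $val ]] || continue   # skip the banner line, as at @22
            eval "${ref}[\$reg]=\${val# }"   # assign into the named array, as at @23
        done < <("$@")           # the identify command, as at @16
    }

    # Usage mirroring the trace (the nvme binary path is an assumption here):
    #   nvme_get_sketch ng0n1 nvme id-ns /dev/ng0n1
    #   echo "${ng0n1[nsze]}"    # -> 0x140000, matching the parse above

Once a namespace array is filled, the @58 assignment that follows records it in the controller's _ctrl_ns map keyed by namespace index, and after the last namespace the controller itself is registered in the ctrls, nvmes, bdfs, and ordered_ctrls tables (@60-@63) keyed by ctrl_dev, which is what the rest of the fdp test walks.
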
00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.656 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:22.657 10:21:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.657 10:21:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.657 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:22.658 10:21:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:22.658 10:21:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:22.658 10:21:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.658 10:21:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:22.658 10:21:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.658 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
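The entries above trace nvme/functions.sh turning 'nvme id-ctrl' output into a global bash associative array: each 'field : value' line is split by 'read -r reg val' with IFS=':', blank values are skipped via '[[ -n $val ]]', and the pair is stored with eval. A minimal sketch of that loop, reconstructed from the trace (illustrative only; the real nvme_get embeds the literal value into the eval string and may differ in other details):

nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                         # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
                [[ -n $val ]] || continue           # skip blank/unparsed lines
                reg=${reg//[[:space:]]/}            # 'mdts ' -> 'mdts'
                val=${val# }                        # drop the space after ':'
                eval "${ref}[\$reg]=\$val"          # nvme1[mdts]=7, nvme1[sn]='12340 ', ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# usage mirroring the trace: nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[mdts]}"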
00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.659 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.660 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
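One captured field worth decoding is mdts=7 from the id-ctrl pass above: MDTS is a power-of-two multiplier on the controller's minimum memory page size and caps the data carried by a single command. A quick check, assuming the common 4 KiB minimum page size (an assumption; the authoritative value is CAP.MPSMIN, which this trace does not show):

mps_min=4096                        # assumed CAP.MPSMIN page size, not shown in this log
mdts=${nvme1[mdts]}                 # 7, per the trace above
echo $(( mps_min * (1 << mdts) ))   # 524288 bytes, i.e. a 512 KiB cap per I/O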
00:15:22.660 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:22.660 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.660 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.660 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.946 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:22.947 10:21:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:22.948 10:21:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:22.948 10:21:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
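The ng1n1 geometry captured above is enough to work out the namespace capacity: the low nibble of flbas=0x7 selects LBA format 7, and the lbaf7 descriptor dumped a few entries further down reports lbads:12, i.e. 2^12 = 4096-byte data blocks. With nsze=0x17a17a blocks:

fmt=$(( 0x7 & 0xf ))                # flbas low nibble -> in-use format index 7
block=$(( 1 << 12 ))                # lbaf7 lbads:12 -> 4096-byte blocks
nsze=$(( 0x17a17a ))                # 1548666 blocks
echo $(( nsze * block ))            # 6343335936 bytes, roughly 5.9 GiB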
00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:22.948 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.948 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
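The two per-namespace passes in this trace (ng1n1 here, nvme1n1 just below) come from the extglob pattern visible in the functions.sh@54 entries: @(A|B) matches exactly one alternative, so a single glob walks both the generic character device and the block device under the controller's sysfs directory. A standalone illustration, assuming the same sysfs layout (extglob must already be enabled; that shopt call is not part of this excerpt):

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
# "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the pattern
# expands to /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches both nodes
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"            # ng1n1, then nvme1n1
done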
00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.949 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.949 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:22.950 10:21:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:22.950 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
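
The trace above and below is the same handful of functions.sh lines looping over every "field : value" pair that nvme-cli prints. A minimal re-sketch of that parsing pattern (not the verbatim SPDK helper; the whitespace trimming here is simplified):

nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"                  # global assoc array named after the device, as @20 does
  while IFS=: read -r reg val; do      # split each output line on the first ':', as @21 does
    reg=${reg//[[:space:]]/}           # "lbaf  7 " -> "lbaf7", "nsze   " -> "nsze"
    [[ -n $val ]] || continue          # skip banner/blank lines, as the @22 test does
    eval "${ref}[\$reg]=\"\${val# }\"" # store into e.g. nvme1n1[nsze], as @23 does
  done < <("$@")
}
# e.g.: nvme_get nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
#       echo "${nvme1n1[nsze]}"        # -> 0x17a17a, matching the parse above
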
00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.950 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:22.951 10:21:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:22.951 10:21:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:22.951 10:21:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.951 10:21:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.951 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
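
For reference, the mdts=7 captured for nvme2 just above is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN, which is not in this log; 4 KiB is assumed here as the common QEMU value), so the implied maximum data transfer size works out as:

mdts=7                              # from the nvme2 id-ctrl parse above
echo $(( (1 << mdts) * 4096 ))      # 2^7 * 4 KiB = 524288 bytes = 512 KiB, assuming MPSMIN = 4 KiB
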
00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:22.952 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
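
The wctemp/cctemp values parsed above are reported in integer Kelvin per the NVMe spec; subtracting 273 gives the conventional Celsius thresholds:

echo $(( 343 - 273 ))   # wctemp -> 70 C  (warning composite temperature)
echo $(( 373 - 273 ))   # cctemp -> 100 C (critical composite temperature)
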
00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:22.952 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.952 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:22.953 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
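
The namespace loop that resumes below (functions.sh@53-57) walks an extglob pattern matching both the generic char node ngXnY and the block node nvmeXnY under the controller's sysfs directory, handing each one to nvme_get. A standalone re-sketch, assuming shopt -s extglob and the same sysfs layout:

shopt -s extglob
ctrl=/sys/class/nvme/nvme2
declare -n _ctrl_ns=nvme2_ns                  # nameref, as @53's 'local -n' does inside the function
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  # pattern expands to /sys/class/nvme/nvme2/@(ng2|nvme2n)*
  [[ -e $ns ]] || continue                    # the @55 existence check
  echo "would nvme_get ${ns##*/}"             # -> ng2n1, then nvme2n1
done
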
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.954 
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:15:22.954 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
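The trace above is the tail of nvme_get filling the ng2n1 array: functions.sh@21 splits each line of nvme id-ns output on ':' with IFS and read, @22 skips fields with no value, and @23 evals the pair into the named associative array. A minimal stand-alone sketch of that pattern, assuming bash >= 4.2 and nvme-cli on PATH (parse_id_ns and the array name ns_info are invented for illustration, not the SPDK helper itself):

    #!/usr/bin/env bash
    parse_id_ns() {                    # usage: parse_id_ns <array-name> <device>
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"            # create/reset the global associative array
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # "lbaf  4 " -> "lbaf4", "nsze   " -> "nsze"
            [[ -n $reg && -n $val ]] || continue
            val=${val# }               # drop the padding space after the colon
            eval "${ref}[\$reg]=\$val" # e.g. ng2n1[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }
    parse_id_ns ns_info /dev/ng2n1 && echo "${ns_info[nsze]}"

Values that themselves contain colons (the lbafN descriptors) survive intact because read assigns everything after the first ':' to val.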
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:15:22.955 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:15:22.956 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
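Between namespaces, functions.sh@54-58 walks the controller's sysfs directory with an extglob pattern that matches both the generic character nodes (ng2nX) and the block nodes (nvme2nX), parses each one, and indexes it in _ctrl_ns by namespace id. Roughly, as a self-contained sketch (list_ctrl_namespaces is an invented name; the glob and the _ctrl_ns assignment are as shown in the trace):

    #!/usr/bin/env bash
    shopt -s extglob nullglob            # @(...) must be enabled before the function is parsed
    list_ctrl_namespaces() {             # arg: controller sysfs dir, e.g. /sys/class/nvme/nvme2
        local ctrl=$1 ns
        local -A _ctrl_ns=()
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            # key = namespace id; for nvme2 the pattern expands to ng2* and nvme2n*
            _ctrl_ns[${ns##*n}]=${ns##*/}
        done
        declare -p _ctrl_ns
    }
    list_ctrl_namespaces /sys/class/nvme/nvme2

Because ng2n1 and nvme2n1 share namespace id 1, the later block-device match overwrites the earlier character-device entry, which is why @58 fires once per device node in this log.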
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:15:22.957 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
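Every lbafN value captured above describes one LBA format from the NVMe Identify Namespace data: ms is the metadata bytes per block, lbads the log2 of the data block size, and rp a relative-performance hint; '(in use)' marks the active format and matches flbas=0x4 (format index 4, i.e. 4096-byte blocks with no metadata). A small sketch for pulling the numbers out of such a string (decode_lbaf is an invented helper):

    #!/usr/bin/env bash
    decode_lbaf() {                       # arg: e.g. 'ms:0 lbads:12 rp:0 (in use)'
        local lbaf=$1 ms lbads
        ms=${lbaf#*ms:};       ms=${ms%% *}
        lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
        printf 'block size %d B, metadata %d B\n' $((1 << lbads)) "$ms"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> block size 4096 B, metadata 0 B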
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.958 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.959 
10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:22.959 10:21:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:22.959 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.960 
10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
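
The block above is bash xtrace from nvme/functions.sh filling the global associative array nvme2n1: line @21 sets IFS=: and reads each "reg : val" pair of the identify output, @22 skips pairs with an empty value, and @23 evals the assignment (nvme2n1[nsze]=0x100000, nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ', and so on). A minimal standalone sketch of that parsing pattern, condensed from the trace; the real helper does extra argument handling and field-name cleanup, so treat this as illustrative only:

  # Sketch of the nvme_get pattern traced above (simplified, not verbatim).
  nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                    # e.g. declare -gA nvme2n1=() (functions.sh@20)
    while IFS=: read -r reg val; do        # split "reg : val" on ':' (functions.sh@21)
      reg=${reg//[[:space:]]/}             # "lbaf  0" -> "lbaf0"
      val=${val# }                         # drop the leading space after ':'
      [[ -n $val ]] || continue            # skip blank values (functions.sh@22)
      eval "${ref}[\$reg]=\$val"           # nvme2n1[nsze]=0x100000 ... (functions.sh@23)
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")   # functions.sh@16
  }
  # Usage mirroring the trace: nvme_get nvme2n1 id-ns /dev/nvme2n1
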
00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:22.960 10:21:17 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:22.960 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.961 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.222 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:23.223 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:23.223 10:21:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.223 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:23.224 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:23.224 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:23.224 10:21:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:23.224 10:21:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:23.224 10:21:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:23.224 10:21:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:23.224 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
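
At functions.sh@58-63, visible just above, the namespace pass for nvme2 finishes (_ctrl_ns and the ctrls/nvmes/bdfs/ordered_ctrls maps are updated for nvme2 at 0000:00:12.0) and the @47 loop advances to /sys/class/nvme/nvme3, checks its PCI address 0000:00:13.0 with pci_can_use, and starts an id-ctrl pass via nvme_get nvme3 id-ctrl /dev/nvme3. A condensed sketch of that outer enumeration, assuming the nvme_get helper sketched earlier; the PCI-address lookup is an assumption (the trace only shows pci=0000:00:13.0), and the extglob namespace pattern from the real script is simplified:

  # Sketch of the controller walk at functions.sh@47-63 (simplified).
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                       # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed lookup; trace only shows the result
    pci_can_use "$pci" || continue                   # functions.sh@50 allow/block-list check
    ctrl_dev=${ctrl##*/}                             # functions.sh@51, e.g. nvme3
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # functions.sh@52
    for ns in "$ctrl/${ctrl_dev}n"*; do              # real script's extglob also matches ng* (functions.sh@54)
      [[ -e $ns ]] || continue                       # functions.sh@55
      nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"    # functions.sh@56-57
    done
    ctrls[$ctrl_dev]=$ctrl_dev                       # functions.sh@60
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # functions.sh@61
    bdfs[$ctrl_dev]=$pci                             # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # functions.sh@63
  done
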
00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 
10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:23.225 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:23.226 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
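The wall of eval trace above (and the power-state entries that follow) is SPDK's functions.sh ingesting identify-controller output one register per iteration: split each "name : value" line on ':', skip blank values, and eval the pair into a per-controller associative array. A minimal sketch of that loop, assuming nvme-cli's plain-text id-ctrl output and a hypothetical /dev/nvme3:

    # Sketch of the ingestion loop traced above: read "name : value" lines
    # and store each pair in an associative array, mirroring the
    # IFS=: / read -r reg val / eval sequence in functions.sh@21-23.
    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # functions.sh@22: skip blank values
        reg=${reg//[[:space:]]/}         # "ps    0 " -> "ps0"
        val=${val# }                     # drop the space after the colon
        eval "nvme3[$reg]=\"$val\""      # e.g. nvme3[ctratt]="0x88010"
    done < <(nvme id-ctrl /dev/nvme3)    # hypothetical device path
    echo "lpa=${nvme3[lpa]} ctratt=${nvme3[ctratt]}"

The eval is there because the real helper targets a dynamically named array (nvme0 through nvme3); with a fixed name, a direct nvme3[$reg]=$val assignment would do.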
00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:23.227 10:21:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:23.227 10:21:17 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:15:23.228 10:21:17 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:15:23.228 10:21:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:23.228 10:21:17 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:23.228 10:21:17 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:23.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:24.359 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.359 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.359 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.359 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.359 10:21:18 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:24.359 10:21:18 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:24.359 10:21:18 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.359 10:21:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:24.359 ************************************ 00:15:24.359 START TEST nvme_flexible_data_placement 00:15:24.359 ************************************ 00:15:24.359 10:21:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:24.930 Initializing NVMe Controllers 00:15:24.930 Attaching to 0000:00:13.0 00:15:24.930 Controller supports FDP Attached to 0000:00:13.0 00:15:24.930 Namespace ID: 1 Endurance Group ID: 1 00:15:24.930 Initialization complete. 
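The controller selection traced above comes down to one bit test: ctrl_has_fdp fetches each controller's CTRATT word and checks bit 19, the Flexible Data Placement flag, which is why nvme3 (ctratt=0x88010) is picked while the 0x8000 controllers are passed over. A standalone sketch of the same check, assuming nvme-cli's id-ctrl text output and a hypothetical device path:

    # Sketch of ctrl_has_fdp as traced above: read CTRATT from id-ctrl
    # output and test bit 19 (FDP support).
    dev=${1:-/dev/nvme3}                 # hypothetical device path
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    ctratt=${ctratt:-0}                  # fall back to 0 if the field is absent
    if (( ctratt & 1 << 19 )); then      # 0x88010 & 0x80000 != 0 -> FDP-capable
        echo "$dev supports FDP (ctratt=$ctratt)"
    else
        echo "$dev does not support FDP (ctratt=$ctratt)"
    fi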
00:15:24.930 00:15:24.930 ================================== 00:15:24.930 == FDP tests for Namespace: #01 == 00:15:24.930 ================================== 00:15:24.930 00:15:24.930 Get Feature: FDP: 00:15:24.930 ================= 00:15:24.930 Enabled: Yes 00:15:24.930 FDP configuration Index: 0 00:15:24.930 00:15:24.930 FDP configurations log page 00:15:24.930 =========================== 00:15:24.930 Number of FDP configurations: 1 00:15:24.930 Version: 0 00:15:24.930 Size: 112 00:15:24.930 FDP Configuration Descriptor: 0 00:15:24.930 Descriptor Size: 96 00:15:24.930 Reclaim Group Identifier format: 2 00:15:24.930 FDP Volatile Write Cache: Not Present 00:15:24.930 FDP Configuration: Valid 00:15:24.930 Vendor Specific Size: 0 00:15:24.930 Number of Reclaim Groups: 2 00:15:24.930 Number of Reclaim Unit Handles: 8 00:15:24.930 Max Placement Identifiers: 128 00:15:24.930 Number of Namespaces Supported: 256 00:15:24.930 Reclaim unit Nominal Size: 6000000 bytes 00:15:24.930 Estimated Reclaim Unit Time Limit: Not Reported 00:15:24.930 RUH Desc #000: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #001: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #002: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #003: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #004: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #005: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #006: RUH Type: Initially Isolated 00:15:24.930 RUH Desc #007: RUH Type: Initially Isolated 00:15:24.930 00:15:24.930 FDP reclaim unit handle usage log page 00:15:24.930 ====================================== 00:15:24.930 Number of Reclaim Unit Handles: 8 00:15:24.930 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:24.930 RUH Usage Desc #001: RUH Attributes: Unused 00:15:24.930 RUH Usage Desc #002: RUH Attributes: Unused 00:15:24.930 RUH Usage Desc #003: RUH Attributes: Unused 00:15:24.930 RUH Usage Desc #004: RUH Attributes: Unused 00:15:24.930 RUH Usage Desc #005: RUH Attributes: Unused 00:15:24.930 RUH Usage Desc #006: RUH Attributes: Unused 00:15:24.930 RUH Usage Desc #007: RUH Attributes: Unused 00:15:24.930 00:15:24.930 FDP statistics log page 00:15:24.930 ======================= 00:15:24.930 Host bytes with metadata written: 883286016 00:15:24.930 Media bytes with metadata written: 883404800 00:15:24.930 Media bytes erased: 0 00:15:24.930 00:15:24.930 FDP Reclaim unit handle status 00:15:24.930 ============================== 00:15:24.930 Number of RUHS descriptors: 2 00:15:24.930 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000015a2 00:15:24.930 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:24.930 00:15:24.930 FDP write on placement id: 0 success 00:15:24.930 00:15:24.930 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:15:24.930 00:15:24.930 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:24.930 00:15:24.930 Get Feature: FDP Events for Placement handle: #0 00:15:24.930 ======================== 00:15:24.930 Number of FDP Events: 6 00:15:24.930 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:24.930 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:24.930 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:15:24.930 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:24.930 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:24.930 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:15:24.930 00:15:24.930 FDP events log page
00:15:24.930 =================== 00:15:24.930 Number of FDP events: 1 00:15:24.930 FDP Event #0: 00:15:24.930 Event Type: RU Not Written to Capacity 00:15:24.931 Placement Identifier: Valid 00:15:24.931 NSID: Valid 00:15:24.931 Location: Valid 00:15:24.931 Placement Identifier: 0 00:15:24.931 Event Timestamp: 7 00:15:24.931 Namespace Identifier: 1 00:15:24.931 Reclaim Group Identifier: 0 00:15:24.931 Reclaim Unit Handle Identifier: 0 00:15:24.931 00:15:24.931 FDP test passed 00:15:24.931 00:15:24.931 real 0m0.309s 00:15:24.931 user 0m0.112s 00:15:24.931 sys 0m0.095s 00:15:24.931 10:21:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.931 ************************************ 00:15:24.931 END TEST nvme_flexible_data_placement 00:15:24.931 ************************************ 00:15:24.931 10:21:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 00:15:24.931 real 0m8.370s 00:15:24.931 user 0m1.557s 00:15:24.931 sys 0m1.818s 00:15:24.931 10:21:19 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.931 ************************************ 00:15:24.931 END TEST nvme_fdp 00:15:24.931 ************************************ 00:15:24.931 10:21:19 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 10:21:19 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:15:24.931 10:21:19 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:24.931 10:21:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:24.931 10:21:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.931 10:21:19 -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 ************************************ 00:15:24.931 START TEST nvme_rpc 00:15:24.931 ************************************ 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:24.931 * Looking for test storage... 
00:15:24.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.931 10:21:19 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.931 --rc genhtml_branch_coverage=1 00:15:24.931 --rc genhtml_function_coverage=1 00:15:24.931 --rc genhtml_legend=1 00:15:24.931 --rc geninfo_all_blocks=1 00:15:24.931 --rc geninfo_unexecuted_blocks=1 00:15:24.931 00:15:24.931 ' 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.931 --rc genhtml_branch_coverage=1 00:15:24.931 --rc genhtml_function_coverage=1 00:15:24.931 --rc genhtml_legend=1 00:15:24.931 --rc geninfo_all_blocks=1 00:15:24.931 --rc geninfo_unexecuted_blocks=1 00:15:24.931 00:15:24.931 ' 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:24.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.931 --rc genhtml_branch_coverage=1 00:15:24.931 --rc genhtml_function_coverage=1 00:15:24.931 --rc genhtml_legend=1 00:15:24.931 --rc geninfo_all_blocks=1 00:15:24.931 --rc geninfo_unexecuted_blocks=1 00:15:24.931 00:15:24.931 ' 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.931 --rc genhtml_branch_coverage=1 00:15:24.931 --rc genhtml_function_coverage=1 00:15:24.931 --rc genhtml_legend=1 00:15:24.931 --rc geninfo_all_blocks=1 00:15:24.931 --rc geninfo_unexecuted_blocks=1 00:15:24.931 00:15:24.931 ' 00:15:24.931 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.931 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:15:24.931 10:21:19 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:25.219 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:25.219 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67316 00:15:25.219 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:25.219 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:25.219 10:21:19 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67316 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67316 ']' 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.219 10:21:19 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.219 [2024-11-25 10:21:19.472697] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
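Before spdk_tgt was launched above, the target device was chosen by get_first_nvme_bdf, which asks gen_nvme.sh for a generated bdev config, pulls every transport address out with jq, and keeps the first one. Condensed into a standalone sketch (paths as in this log):

    # Sketch of get_first_nvme_bdf as traced above: list every traddr from
    # the generated SPDK config, require at least one, keep the first.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    echo "${bdfs[0]}"                    # here: 0000:00:10.0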
00:15:25.219 [2024-11-25 10:21:19.472901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67316 ] 00:15:25.477 [2024-11-25 10:21:19.667508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:25.735 [2024-11-25 10:21:19.826683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.735 [2024-11-25 10:21:19.826739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.670 10:21:20 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.670 10:21:20 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:26.670 10:21:20 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:26.928 Nvme0n1 00:15:26.928 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:26.928 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:27.187 request: 00:15:27.187 { 00:15:27.187 "bdev_name": "Nvme0n1", 00:15:27.187 "filename": "non_existing_file", 00:15:27.187 "method": "bdev_nvme_apply_firmware", 00:15:27.187 "req_id": 1 00:15:27.187 } 00:15:27.187 Got JSON-RPC error response 00:15:27.187 response: 00:15:27.187 { 00:15:27.187 "code": -32603, 00:15:27.187 "message": "open file failed." 00:15:27.187 } 00:15:27.187 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:27.187 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:27.187 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:27.447 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:27.447 10:21:21 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67316 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67316 ']' 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67316 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67316 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.447 killing process with pid 67316 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67316' 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67316 00:15:27.447 10:21:21 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67316 00:15:29.977 ************************************ 00:15:29.977 END TEST nvme_rpc 00:15:29.977 ************************************ 00:15:29.977 00:15:29.977 real 0m4.689s 00:15:29.977 user 0m8.965s 00:15:29.977 sys 0m0.787s 00:15:29.977 10:21:23 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.977 10:21:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.977 10:21:23 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:29.977 10:21:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:15:29.977 10:21:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.977 10:21:23 -- common/autotest_common.sh@10 -- # set +x 00:15:29.977 ************************************ 00:15:29.977 START TEST nvme_rpc_timeouts 00:15:29.977 ************************************ 00:15:29.977 10:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:29.977 * Looking for test storage... 00:15:29.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:29.977 10:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:29.977 10:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:15:29.977 10:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:29.977 10:21:23 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:29.977 10:21:23 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.977 10:21:23 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.977 10:21:23 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.977 10:21:23 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.977 10:21:23 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.977 10:21:24 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:15:29.977 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.977 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:29.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.977 --rc genhtml_branch_coverage=1 00:15:29.977 --rc genhtml_function_coverage=1 00:15:29.977 --rc genhtml_legend=1 00:15:29.977 --rc geninfo_all_blocks=1 00:15:29.977 --rc geninfo_unexecuted_blocks=1 00:15:29.978 00:15:29.978 ' 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:29.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.978 --rc genhtml_branch_coverage=1 00:15:29.978 --rc genhtml_function_coverage=1 00:15:29.978 --rc genhtml_legend=1 00:15:29.978 --rc geninfo_all_blocks=1 00:15:29.978 --rc geninfo_unexecuted_blocks=1 00:15:29.978 00:15:29.978 ' 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:29.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.978 --rc genhtml_branch_coverage=1 00:15:29.978 --rc genhtml_function_coverage=1 00:15:29.978 --rc genhtml_legend=1 00:15:29.978 --rc geninfo_all_blocks=1 00:15:29.978 --rc geninfo_unexecuted_blocks=1 00:15:29.978 00:15:29.978 ' 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:29.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.978 --rc genhtml_branch_coverage=1 00:15:29.978 --rc genhtml_function_coverage=1 00:15:29.978 --rc genhtml_legend=1 00:15:29.978 --rc geninfo_all_blocks=1 00:15:29.978 --rc geninfo_unexecuted_blocks=1 00:15:29.978 00:15:29.978 ' 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67392 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67392 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67425 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:15:29.978 10:21:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67425 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67425 ']' 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.978 10:21:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:29.978 [2024-11-25 10:21:24.156624] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:15:29.978 [2024-11-25 10:21:24.156836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67425 ] 00:15:30.261 [2024-11-25 10:21:24.352005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.261 [2024-11-25 10:21:24.514057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.261 [2024-11-25 10:21:24.514124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.242 10:21:25 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.242 10:21:25 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:15:31.242 Checking default timeout settings: 00:15:31.242 10:21:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:31.242 10:21:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:31.500 Making settings changes with rpc: 00:15:31.500 10:21:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:31.500 10:21:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:31.758 Check default vs. modified settings: 00:15:31.758 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:15:31.758 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67392 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67392 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:32.324 Setting action_on_timeout is changed as expected. 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67392 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67392 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:32.324 Setting timeout_us is changed as expected. 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:32.324 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67392 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67392 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:32.325 Setting timeout_admin_us is changed as expected. 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67392 /tmp/settings_modified_67392 00:15:32.325 10:21:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67425 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67425 ']' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67425 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67425 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.325 killing process with pid 67425 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67425' 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67425 00:15:32.325 10:21:26 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67425 00:15:34.855 RPC TIMEOUT SETTING TEST PASSED. 00:15:34.855 10:21:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
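Each "changed as expected" line above comes out of the same three-step extraction: dump the JSON-RPC configuration before and after bdev_nvme_set_options with save_config, pull the field with grep and awk, strip punctuation with sed, and compare the two values. A condensed sketch of that per-setting check, using the temp-file names from this run:

    # Sketch of the per-setting comparison traced above; assumes the two
    # save_config dumps already exist.
    get_setting() {                      # extract one field and normalize it
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(get_setting "$setting" /tmp/settings_default_67392)
        after=$(get_setting "$setting" /tmp/settings_modified_67392)
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed!" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done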
00:15:34.855 00:15:34.855 real 0m5.160s 00:15:34.855 user 0m9.865s 00:15:34.855 sys 0m0.833s 00:15:34.855 10:21:28 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.855 ************************************ 00:15:34.855 END TEST nvme_rpc_timeouts 00:15:34.855 ************************************ 00:15:34.855 10:21:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:34.855 10:21:29 -- spdk/autotest.sh@239 -- # uname -s 00:15:34.855 10:21:29 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:15:34.855 10:21:29 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:34.855 10:21:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.855 10:21:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.855 10:21:29 -- common/autotest_common.sh@10 -- # set +x 00:15:34.855 ************************************ 00:15:34.855 START TEST sw_hotplug 00:15:34.855 ************************************ 00:15:34.855 10:21:29 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:34.855 * Looking for test storage... 00:15:34.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:34.855 10:21:29 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.855 10:21:29 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.855 10:21:29 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:35.113 10:21:29 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.113 10:21:29 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:15:35.113 10:21:29 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.113 10:21:29 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.113 --rc genhtml_branch_coverage=1 00:15:35.113 --rc genhtml_function_coverage=1 00:15:35.113 --rc genhtml_legend=1 00:15:35.113 --rc geninfo_all_blocks=1 00:15:35.113 --rc geninfo_unexecuted_blocks=1 00:15:35.113 00:15:35.113 ' 00:15:35.113 10:21:29 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.113 --rc genhtml_branch_coverage=1 00:15:35.113 --rc genhtml_function_coverage=1 00:15:35.113 --rc genhtml_legend=1 00:15:35.113 --rc geninfo_all_blocks=1 00:15:35.113 --rc geninfo_unexecuted_blocks=1 00:15:35.113 00:15:35.113 ' 00:15:35.113 10:21:29 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.113 --rc genhtml_branch_coverage=1 00:15:35.113 --rc genhtml_function_coverage=1 00:15:35.113 --rc genhtml_legend=1 00:15:35.113 --rc geninfo_all_blocks=1 00:15:35.113 --rc geninfo_unexecuted_blocks=1 00:15:35.113 00:15:35.113 ' 00:15:35.113 10:21:29 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.113 --rc genhtml_branch_coverage=1 00:15:35.113 --rc genhtml_function_coverage=1 00:15:35.113 --rc genhtml_legend=1 00:15:35.113 --rc geninfo_all_blocks=1 00:15:35.113 --rc geninfo_unexecuted_blocks=1 00:15:35.113 00:15:35.113 ' 00:15:35.113 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:35.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:35.631 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:35.631 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:35.631 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:35.631 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:35.631 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:35.631 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:35.631 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
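nvme_in_userspace, whose trace follows, selects controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe). Its lspci pipeline, reproduced as a one-off sketch:

    # Sketch of the class-code filter traced below: list PCI devices in
    # machine-readable numeric form (-mm -n -D), keep progif-02 lines, match
    # the quoted class field "0108", print the domain:bus:dev.fn address.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

sw_hotplug then keeps only the first two of the four controllers it finds (nvme_count=2) and narrows PCI_ALLOWED to match before exercising hotplug.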
00:15:35.631 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@233 -- # local class 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:35.631 10:21:29 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:35.631 10:21:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:15:35.632 10:21:29 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:35.632 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:35.632 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:35.632 10:21:29 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:35.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:36.148 Waiting for block devices as requested 00:15:36.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:36.407 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:36.407 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:36.407 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:41.746 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:41.746 10:21:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:41.746 10:21:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:42.005 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:42.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:42.005 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:42.572 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:42.572 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.572 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.831 10:21:36 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:42.831 10:21:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68305 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:42.831 10:21:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:42.831 10:21:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:42.831 10:21:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:42.831 10:21:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:42.831 10:21:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:42.831 10:21:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:43.090 Initializing NVMe Controllers 00:15:43.090 Attaching to 0000:00:10.0 00:15:43.090 Attaching to 0000:00:11.0 00:15:43.090 Attached to 0000:00:11.0 00:15:43.090 Attached to 0000:00:10.0 00:15:43.090 Initialization complete. Starting I/O... 
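Before any hotplug event fires, nvme_in_userspace (traced at scripts/common.sh@312-329 above) enumerates controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), hence cc=0108 and the -p02 marker in lspci -mm output. A trimmed sketch of just that pipeline; the real helper layers the PCI_ALLOWED/PCI_BLOCKED filter (pci_can_use) and the per-BDF driver checks seen in the trace on top, which are omitted here:

    # Sketch: list NVMe controller BDFs by PCI class code 01/08/02.
    iter_nvme_bdfs() {
        lspci -mm -n -D |                    # numeric IDs, full domain:bus:dev.fn
            grep -i -- -p02 |                # keep prog-if 02 (NVMe) lines
            awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }

    iter_nvme_bdfs    # on this VM: 0000:00:10.0 through 0000:00:13.0

The test then keeps only the first nvme_count=2 entries via nvmes=("${nvmes[@]::nvme_count}"), which is why 0000:00:10.0 and 0000:00:11.0 take part in the events below while 12.0 and 13.0 stay denied through PCI_ALLOWED.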
00:15:43.090 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:43.090 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:43.090 00:15:44.026 QEMU NVMe Ctrl (12341 ): 1160 I/Os completed (+1160) 00:15:44.026 QEMU NVMe Ctrl (12340 ): 1200 I/Os completed (+1200) 00:15:44.026 00:15:45.026 QEMU NVMe Ctrl (12341 ): 2644 I/Os completed (+1484) 00:15:45.026 QEMU NVMe Ctrl (12340 ): 2702 I/Os completed (+1502) 00:15:45.026 00:15:46.401 QEMU NVMe Ctrl (12341 ): 4144 I/Os completed (+1500) 00:15:46.401 QEMU NVMe Ctrl (12340 ): 4209 I/Os completed (+1507) 00:15:46.401 00:15:47.332 QEMU NVMe Ctrl (12341 ): 5820 I/Os completed (+1676) 00:15:47.332 QEMU NVMe Ctrl (12340 ): 5922 I/Os completed (+1713) 00:15:47.332 00:15:48.265 QEMU NVMe Ctrl (12341 ): 7468 I/Os completed (+1648) 00:15:48.265 QEMU NVMe Ctrl (12340 ): 7616 I/Os completed (+1694) 00:15:48.265 00:15:48.832 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:48.832 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:48.832 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:48.832 [2024-11-25 10:21:43.079054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:48.832 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:48.832 [2024-11-25 10:21:43.082402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.082836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.082959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.083080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:48.832 [2024-11-25 10:21:43.086221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.086292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.086319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.086344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:48.832 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:48.832 [2024-11-25 10:21:43.106695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
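The failed-state and "Controller removed" lines around this point are the surprise removal itself: sw_hotplug.sh@40 echoes 1 per device to detach it, and @56-62 later rescan and rebind to uio_pci_generic before the next event. The trace shows only the values being echoed, not their destinations, so the sysfs paths in this sketch are an assumption based on the standard Linux PCI hotplug interface rather than something read out of sw_hotplug.sh:

    # Hedged sketch of the remove/rescan/rebind sequence (paths assumed).
    remove_device() { echo 1 > "/sys/bus/pci/devices/$1/remove"; }   # ~ sw_hotplug.sh@40
    rescan_bus()    { echo 1 > /sys/bus/pci/rescan; }                # ~ sw_hotplug.sh@56

    rebind_device() {   # $1 = BDF, $2 = target driver, e.g. uio_pci_generic
        echo "$2" > "/sys/bus/pci/devices/$1/driver_override"
        echo "$1" > /sys/bus/pci/drivers_probe
        echo ''   > "/sys/bus/pci/devices/$1/driver_override"        # clear the override
    }

    remove_device 0000:00:10.0
    rescan_bus
    rebind_device 0000:00:10.0 uio_pci_generic

That reattach is what produces the "Attaching to 0000:00:10.0" / "Attached to 0000:00:10.0" lines further down, after which the I/O counters resume.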
00:15:48.832 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:48.832 [2024-11-25 10:21:43.108795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.108995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.832 [2024-11-25 10:21:43.109085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.833 [2024-11-25 10:21:43.109146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.833 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:48.833 [2024-11-25 10:21:43.112374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.833 [2024-11-25 10:21:43.112436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.833 [2024-11-25 10:21:43.112481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.833 [2024-11-25 10:21:43.112502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.833 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:48.833 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:49.090 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.091 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:49.091 Attaching to 0000:00:10.0 00:15:49.091 Attached to 0000:00:10.0 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.091 10:21:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:49.091 Attaching to 0000:00:11.0 00:15:49.091 Attached to 0000:00:11.0 00:15:50.026 QEMU NVMe Ctrl (12340 ): 1631 I/Os completed (+1631) 00:15:50.026 QEMU NVMe Ctrl (12341 ): 1485 I/Os completed (+1485) 00:15:50.026 00:15:51.399 QEMU NVMe Ctrl (12340 ): 3297 I/Os completed (+1666) 00:15:51.399 QEMU NVMe Ctrl (12341 ): 3176 I/Os completed (+1691) 00:15:51.399 00:15:52.334 QEMU NVMe Ctrl (12340 ): 5009 I/Os completed (+1712) 00:15:52.334 QEMU NVMe Ctrl (12341 ): 4928 I/Os completed (+1752) 00:15:52.334 00:15:53.269 QEMU NVMe Ctrl (12340 ): 6635 I/Os completed (+1626) 00:15:53.269 QEMU NVMe Ctrl (12341 ): 6564 I/Os completed (+1636) 00:15:53.269 00:15:54.202 QEMU NVMe Ctrl (12340 ): 8371 I/Os completed (+1736) 00:15:54.202 QEMU NVMe Ctrl (12341 ): 8330 I/Os completed (+1766) 00:15:54.202 00:15:55.137 QEMU NVMe Ctrl (12340 ): 10094 I/Os completed (+1723) 00:15:55.137 QEMU NVMe Ctrl (12341 ): 10063 I/Os completed (+1733) 00:15:55.137 00:15:56.089 QEMU NVMe Ctrl (12340 ): 11782 I/Os completed (+1688) 00:15:56.089 QEMU NVMe Ctrl (12341 ): 11784 I/Os completed (+1721) 00:15:56.089 00:15:57.026 QEMU NVMe Ctrl (12340 ): 13518 I/Os completed (+1736) 00:15:57.026 QEMU NVMe Ctrl (12341 ): 13537 I/Os completed (+1753) 00:15:57.026 
00:15:58.404 QEMU NVMe Ctrl (12340 ): 15230 I/Os completed (+1712) 00:15:58.404 QEMU NVMe Ctrl (12341 ): 15278 I/Os completed (+1741) 00:15:58.404 00:15:59.372 QEMU NVMe Ctrl (12340 ): 16938 I/Os completed (+1708) 00:15:59.372 QEMU NVMe Ctrl (12341 ): 17004 I/Os completed (+1726) 00:15:59.372 00:16:00.308 QEMU NVMe Ctrl (12340 ): 18639 I/Os completed (+1701) 00:16:00.308 QEMU NVMe Ctrl (12341 ): 18727 I/Os completed (+1723) 00:16:00.308 00:16:01.243 QEMU NVMe Ctrl (12340 ): 20342 I/Os completed (+1703) 00:16:01.243 QEMU NVMe Ctrl (12341 ): 20461 I/Os completed (+1734) 00:16:01.243 00:16:01.243 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:01.243 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:01.243 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:01.243 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:01.243 [2024-11-25 10:21:55.424221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:01.243 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:01.243 [2024-11-25 10:21:55.426646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.426942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.427100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.427180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:01.243 [2024-11-25 10:21:55.430583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.430797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.430962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.431136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:01.243 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:01.243 [2024-11-25 10:21:55.447951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:01.243 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:01.243 [2024-11-25 10:21:55.450120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.450395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.450493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.450566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:01.243 [2024-11-25 10:21:55.453593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.453758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.453917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.243 [2024-11-25 10:21:55.454083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.244 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:01.244 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:01.244 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:01.244 EAL: Scan for (pci) bus failed. 00:16:01.244 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:01.244 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:01.244 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:01.502 Attaching to 0000:00:10.0 00:16:01.502 Attached to 0000:00:10.0 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:01.502 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:01.502 Attaching to 0000:00:11.0 00:16:01.502 Attached to 0000:00:11.0 00:16:02.069 QEMU NVMe Ctrl (12340 ): 1224 I/Os completed (+1224) 00:16:02.069 QEMU NVMe Ctrl (12341 ): 1041 I/Os completed (+1041) 00:16:02.069 00:16:03.052 QEMU NVMe Ctrl (12340 ): 2796 I/Os completed (+1572) 00:16:03.052 QEMU NVMe Ctrl (12341 ): 2619 I/Os completed (+1578) 00:16:03.052 00:16:04.015 QEMU NVMe Ctrl (12340 ): 4453 I/Os completed (+1657) 00:16:04.015 QEMU NVMe Ctrl (12341 ): 4299 I/Os completed (+1680) 00:16:04.015 00:16:05.406 QEMU NVMe Ctrl (12340 ): 6125 I/Os completed (+1672) 00:16:05.406 QEMU NVMe Ctrl (12341 ): 6008 I/Os completed (+1709) 00:16:05.406 00:16:06.343 QEMU NVMe Ctrl (12340 ): 7773 I/Os completed (+1648) 00:16:06.343 QEMU NVMe Ctrl (12341 ): 7671 I/Os completed (+1663) 00:16:06.343 00:16:07.276 QEMU NVMe Ctrl (12340 ): 9429 I/Os completed (+1656) 00:16:07.276 QEMU NVMe Ctrl (12341 ): 9352 I/Os completed (+1681) 00:16:07.276 00:16:08.209 QEMU NVMe Ctrl (12340 ): 11072 I/Os completed (+1643) 00:16:08.209 QEMU NVMe Ctrl (12341 ): 11076 I/Os completed (+1724) 00:16:08.209 
00:16:09.142 QEMU NVMe Ctrl (12340 ): 12709 I/Os completed (+1637) 00:16:09.142 QEMU NVMe Ctrl (12341 ): 12905 I/Os completed (+1829) 00:16:09.142 00:16:10.079 QEMU NVMe Ctrl (12340 ): 14209 I/Os completed (+1500) 00:16:10.079 QEMU NVMe Ctrl (12341 ): 14492 I/Os completed (+1587) 00:16:10.079 00:16:11.011 QEMU NVMe Ctrl (12340 ): 16334 I/Os completed (+2125) 00:16:11.011 QEMU NVMe Ctrl (12341 ): 17051 I/Os completed (+2559) 00:16:11.011 00:16:12.384 QEMU NVMe Ctrl (12340 ): 17687 I/Os completed (+1353) 00:16:12.384 QEMU NVMe Ctrl (12341 ): 18489 I/Os completed (+1438) 00:16:12.384 00:16:13.317 QEMU NVMe Ctrl (12340 ): 19257 I/Os completed (+1570) 00:16:13.317 QEMU NVMe Ctrl (12341 ): 20126 I/Os completed (+1637) 00:16:13.317 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:13.575 [2024-11-25 10:22:07.728493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:13.575 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:13.575 [2024-11-25 10:22:07.730702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.731019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.731110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.731280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:13.575 [2024-11-25 10:22:07.734846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.735052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.735120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.735256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:13.575 [2024-11-25 10:22:07.757587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:13.575 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:13.575 [2024-11-25 10:22:07.759762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.760027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.760081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.760108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:13.575 [2024-11-25 10:22:07.762931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.762974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.763005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 [2024-11-25 10:22:07.763026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:13.575 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:13.832 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:13.832 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:13.832 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:13.832 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:13.832 10:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:13.832 Attaching to 0000:00:10.0 00:16:13.832 Attached to 0000:00:10.0 00:16:13.833 10:22:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:13.833 10:22:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:13.833 10:22:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:13.833 Attaching to 0000:00:11.0 00:16:13.833 Attached to 0000:00:11.0 00:16:13.833 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:13.833 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:13.833 [2024-11-25 10:22:08.065224] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:26.029 10:22:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:26.029 10:22:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:26.029 10:22:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.98 00:16:26.029 10:22:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.98 00:16:26.029 10:22:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:26.029 10:22:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.98 00:16:26.029 10:22:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.98 2 00:16:26.029 remove_attach_helper took 42.98s to complete (handling 2 nvme drive(s)) 10:22:20 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68305 00:16:32.589 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68305) - No such process 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68305 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68842 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:32.589 10:22:26 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68842 00:16:32.589 10:22:26 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68842 ']' 00:16:32.589 10:22:26 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.589 10:22:26 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.589 10:22:26 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.589 10:22:26 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.589 10:22:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:32.589 [2024-11-25 10:22:26.235669] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:16:32.589 [2024-11-25 10:22:26.236268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68842 ] 00:16:32.589 [2024-11-25 10:22:26.429203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.589 [2024-11-25 10:22:26.606615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:33.563 10:22:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:33.563 10:22:27 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:33.563 10:22:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:40.125 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:40.125 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:40.125 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:40.125 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:40.125 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:40.125 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:40.126 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:40.126 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:40.126 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:40.126 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:40.126 10:22:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:40.126 10:22:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 [2024-11-25 10:22:33.733613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:40.126 [2024-11-25 10:22:33.737034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:33.737121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:33.737171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 [2024-11-25 10:22:33.737225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:33.737248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:33.737267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 [2024-11-25 10:22:33.737284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:33.737301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:33.737315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 [2024-11-25 10:22:33.737338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:33.737352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:33.737368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 10:22:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 10:22:33 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:40.126 10:22:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:40.126 10:22:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 10:22:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 10:22:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:40.126 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:40.126 [2024-11-25 10:22:34.433625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:16:40.126 [2024-11-25 10:22:34.437639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:34.437689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:34.437731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 [2024-11-25 10:22:34.437765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:34.437783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:34.437818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 [2024-11-25 10:22:34.437839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:34.437853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:34.437870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.126 [2024-11-25 10:22:34.437884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.126 [2024-11-25 10:22:34.437900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.126 [2024-11-25 10:22:34.437914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.697 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:40.697 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:40.697 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:40.697 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:40.697 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:40.697 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:40.697 10:22:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.697 10:22:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:40.697 10:22:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.698 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:40.698 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:40.698 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.698 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.698 10:22:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.979 10:22:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:53.185 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.186 10:22:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.186 10:22:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.186 10:22:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:53.186 [2024-11-25 10:22:47.233829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:53.186 [2024-11-25 10:22:47.237519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.186 [2024-11-25 10:22:47.237700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.186 [2024-11-25 10:22:47.237893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.186 [2024-11-25 10:22:47.237949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.186 [2024-11-25 10:22:47.237969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.186 [2024-11-25 10:22:47.237988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.186 [2024-11-25 10:22:47.238006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.186 [2024-11-25 10:22:47.238024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.186 [2024-11-25 10:22:47.238038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.186 [2024-11-25 10:22:47.238057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.186 [2024-11-25 10:22:47.238072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.186 [2024-11-25 10:22:47.238090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.186 10:22:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.186 10:22:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.186 10:22:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:53.186 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:53.445 [2024-11-25 10:22:47.633847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:53.445 [2024-11-25 10:22:47.637182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.445 [2024-11-25 10:22:47.637236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.445 [2024-11-25 10:22:47.637268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.445 [2024-11-25 10:22:47.637300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.445 [2024-11-25 10:22:47.637321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.445 [2024-11-25 10:22:47.637336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.445 [2024-11-25 10:22:47.637356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.445 [2024-11-25 10:22:47.637371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.445 [2024-11-25 10:22:47.637403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.445 [2024-11-25 10:22:47.637418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.445 [2024-11-25 10:22:47.637434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.445 [2024-11-25 10:22:47.637448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.703 10:22:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.703 10:22:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.703 10:22:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:53.703 10:22:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:53.703 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:53.962 10:22:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:06.192 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:06.192 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:06.192 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:06.192 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.192 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.192 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.192 10:23:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.193 10:23:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.193 10:23:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.193 10:23:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.193 10:23:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.193 [2024-11-25 10:23:00.234160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:06.193 [2024-11-25 10:23:00.238292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.193 [2024-11-25 10:23:00.238357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.193 [2024-11-25 10:23:00.238382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.193 [2024-11-25 10:23:00.238420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.193 [2024-11-25 10:23:00.238437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.193 [2024-11-25 10:23:00.238459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.193 [2024-11-25 10:23:00.238476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.193 [2024-11-25 10:23:00.238494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.193 [2024-11-25 10:23:00.238508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.193 [2024-11-25 10:23:00.238527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.193 [2024-11-25 10:23:00.238542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.193 [2024-11-25 10:23:00.238559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.193 10:23:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:06.193 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:06.450 [2024-11-25 10:23:00.634183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:06.450 [2024-11-25 10:23:00.638365] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.450 [2024-11-25 10:23:00.638424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.450 [2024-11-25 10:23:00.638459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.450 [2024-11-25 10:23:00.638496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.450 [2024-11-25 10:23:00.638523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.450 [2024-11-25 10:23:00.638540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.450 [2024-11-25 10:23:00.638566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.450 [2024-11-25 10:23:00.638584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.450 [2024-11-25 10:23:00.638611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.450 [2024-11-25 10:23:00.638630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.450 [2024-11-25 10:23:00.638651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.450 [2024-11-25 10:23:00.638667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.708 10:23:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.708 10:23:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.708 10:23:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:06.708 10:23:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:06.708 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:06.965 10:23:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.58 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.58 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.58 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.58 2 00:17:19.196 remove_attach_helper took 45.58s to complete (handling 2 nvme drive(s)) 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:19.196 10:23:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:19.196 10:23:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:19.196 10:23:13 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:25.756 10:23:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.756 10:23:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:25.756 10:23:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.756 [2024-11-25 10:23:19.345613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:25.756 [2024-11-25 10:23:19.349041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.349124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.349149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 [2024-11-25 10:23:19.349192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.349211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.349233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 [2024-11-25 10:23:19.349250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.349272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.349288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 [2024-11-25 10:23:19.349310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.349326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.349353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:25.756 [2024-11-25 10:23:19.745622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:25.756 [2024-11-25 10:23:19.748056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.748110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.748138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 [2024-11-25 10:23:19.748172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.748192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.748208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 [2024-11-25 10:23:19.748229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.748244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.748264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 [2024-11-25 10:23:19.748281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:25.756 [2024-11-25 10:23:19.748298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.756 [2024-11-25 10:23:19.748313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:25.756 10:23:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.756 10:23:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:25.756 10:23:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:25.756 10:23:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:25.756 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:26.014 10:23:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:38.363 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:38.364 10:23:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.364 10:23:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:38.364 10:23:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:38.364 10:23:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.364 10:23:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 10:23:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.364 [2024-11-25 10:23:32.345829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:38.364 [2024-11-25 10:23:32.349325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.364 [2024-11-25 10:23:32.349398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.364 [2024-11-25 10:23:32.349424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.364 [2024-11-25 10:23:32.349463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.364 [2024-11-25 10:23:32.349480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.364 [2024-11-25 10:23:32.349498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.364 [2024-11-25 10:23:32.349515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.364 [2024-11-25 10:23:32.349534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.364 [2024-11-25 10:23:32.349548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.364 [2024-11-25 10:23:32.349567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.364 [2024-11-25 10:23:32.349581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.364 [2024-11-25 10:23:32.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:38.364 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:38.624 [2024-11-25 10:23:32.745801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:38.624 [2024-11-25 10:23:32.748363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.624 [2024-11-25 10:23:32.748420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.624 [2024-11-25 10:23:32.748448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.624 [2024-11-25 10:23:32.748483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.624 [2024-11-25 10:23:32.748508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.624 [2024-11-25 10:23:32.748523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.624 [2024-11-25 10:23:32.748547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.624 [2024-11-25 10:23:32.748561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.625 [2024-11-25 10:23:32.748580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.625 [2024-11-25 10:23:32.748596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.625 [2024-11-25 10:23:32.748613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.625 [2024-11-25 10:23:32.748627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:38.625 10:23:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.625 10:23:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:38.625 10:23:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:38.625 10:23:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:38.886 10:23:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:51.089 10:23:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.089 10:23:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:51.089 10:23:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:51.089 10:23:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.089 10:23:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:51.089 10:23:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.089 [2024-11-25 10:23:45.345993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:51.089 [2024-11-25 10:23:45.349264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.089 [2024-11-25 10:23:45.349319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.089 [2024-11-25 10:23:45.349342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.089 [2024-11-25 10:23:45.349379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.089 [2024-11-25 10:23:45.349395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.089 [2024-11-25 10:23:45.349414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.089 [2024-11-25 10:23:45.349433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.089 [2024-11-25 10:23:45.349454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.089 [2024-11-25 10:23:45.349468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.089 [2024-11-25 10:23:45.349487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.089 [2024-11-25 10:23:45.349500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.089 [2024-11-25 10:23:45.349517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:51.089 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:51.656 [2024-11-25 10:23:45.746005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:51.656 [2024-11-25 10:23:45.752408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.656 [2024-11-25 10:23:45.752465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.656 [2024-11-25 10:23:45.752493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.656 [2024-11-25 10:23:45.752527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.656 [2024-11-25 10:23:45.752547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.656 [2024-11-25 10:23:45.752563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.656 [2024-11-25 10:23:45.752584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.656 [2024-11-25 10:23:45.752599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.656 [2024-11-25 10:23:45.752617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.656 [2024-11-25 10:23:45.752632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:51.656 [2024-11-25 10:23:45.752654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.656 [2024-11-25 10:23:45.752668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:51.656 10:23:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:51.656 10:23:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:51.656 10:23:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:51.656 10:23:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:51.915 10:23:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.00 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.00 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.00 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.00 2 00:18:04.125 remove_attach_helper took 45.00s to complete (handling 2 nvme drive(s)) 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:04.125 10:23:58 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68842 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68842 ']' 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68842 00:18:04.125 10:23:58 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68842 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.126 killing process with pid 68842 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68842' 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68842 00:18:04.126 10:23:58 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68842 00:18:06.657 10:24:00 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:06.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.488 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:07.488 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:07.488 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.488 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.488 00:18:07.488 real 2m32.701s 00:18:07.488 user 1m53.320s 00:18:07.488 sys 0m19.069s 00:18:07.488 10:24:01 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.489 ************************************ 00:18:07.489 END TEST sw_hotplug 00:18:07.489 ************************************ 00:18:07.489 10:24:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:07.489 10:24:01 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:07.489 10:24:01 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:07.489 10:24:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.489 10:24:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.489 10:24:01 -- common/autotest_common.sh@10 -- # set +x 00:18:07.489 ************************************ 00:18:07.489 START TEST nvme_xnvme 00:18:07.489 ************************************ 00:18:07.489 10:24:01 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:07.749 * Looking for test storage... 00:18:07.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:07.749 10:24:01 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.749 10:24:01 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.749 10:24:01 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.750 10:24:01 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.750 --rc genhtml_branch_coverage=1 00:18:07.750 --rc genhtml_function_coverage=1 00:18:07.750 --rc genhtml_legend=1 00:18:07.750 --rc geninfo_all_blocks=1 00:18:07.750 --rc geninfo_unexecuted_blocks=1 00:18:07.750 00:18:07.750 ' 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.750 --rc genhtml_branch_coverage=1 00:18:07.750 --rc genhtml_function_coverage=1 00:18:07.750 --rc genhtml_legend=1 00:18:07.750 --rc geninfo_all_blocks=1 00:18:07.750 --rc geninfo_unexecuted_blocks=1 00:18:07.750 00:18:07.750 ' 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.750 --rc genhtml_branch_coverage=1 00:18:07.750 --rc genhtml_function_coverage=1 00:18:07.750 --rc genhtml_legend=1 00:18:07.750 --rc geninfo_all_blocks=1 00:18:07.750 --rc geninfo_unexecuted_blocks=1 00:18:07.750 00:18:07.750 ' 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.750 --rc genhtml_branch_coverage=1 00:18:07.750 --rc genhtml_function_coverage=1 00:18:07.750 --rc genhtml_legend=1 00:18:07.750 --rc geninfo_all_blocks=1 00:18:07.750 --rc geninfo_unexecuted_blocks=1 00:18:07.750 00:18:07.750 ' 00:18:07.750 10:24:01 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:07.750 10:24:01 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:07.750 10:24:01 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:07.750 10:24:01 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:07.750 10:24:01 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:07.751 10:24:01 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:07.751 10:24:01 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:07.751 10:24:01 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:07.751 10:24:01 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:07.752 10:24:01 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:07.752 10:24:01 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:07.752 10:24:01 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:07.752 10:24:01 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:07.752 #define SPDK_CONFIG_H 00:18:07.752 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:07.752 #define SPDK_CONFIG_APPS 1 00:18:07.752 #define SPDK_CONFIG_ARCH native 00:18:07.752 #define SPDK_CONFIG_ASAN 1 00:18:07.752 #undef SPDK_CONFIG_AVAHI 00:18:07.752 #undef SPDK_CONFIG_CET 00:18:07.752 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:07.752 #define SPDK_CONFIG_COVERAGE 1 00:18:07.752 #define SPDK_CONFIG_CROSS_PREFIX 00:18:07.752 #undef SPDK_CONFIG_CRYPTO 00:18:07.752 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:07.752 #undef SPDK_CONFIG_CUSTOMOCF 00:18:07.752 #undef SPDK_CONFIG_DAOS 00:18:07.752 #define SPDK_CONFIG_DAOS_DIR 00:18:07.752 #define SPDK_CONFIG_DEBUG 1 00:18:07.752 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:07.752 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:07.752 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:07.752 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:07.752 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:07.752 #undef SPDK_CONFIG_DPDK_UADK 00:18:07.752 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:07.752 #define SPDK_CONFIG_EXAMPLES 1 00:18:07.752 #undef SPDK_CONFIG_FC 00:18:07.752 #define SPDK_CONFIG_FC_PATH 00:18:07.752 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:07.752 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:07.752 #define SPDK_CONFIG_FSDEV 1 00:18:07.752 #undef SPDK_CONFIG_FUSE 00:18:07.752 #undef SPDK_CONFIG_FUZZER 00:18:07.752 #define SPDK_CONFIG_FUZZER_LIB 00:18:07.752 #undef SPDK_CONFIG_GOLANG 00:18:07.752 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:07.752 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:07.752 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:07.752 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:07.752 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:07.752 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:07.752 #undef SPDK_CONFIG_HAVE_LZ4 00:18:07.752 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:07.752 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:07.752 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:07.752 #define SPDK_CONFIG_IDXD 1 00:18:07.752 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:07.752 #undef SPDK_CONFIG_IPSEC_MB 00:18:07.752 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:07.752 #define SPDK_CONFIG_ISAL 1 00:18:07.752 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:07.752 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:07.752 #define SPDK_CONFIG_LIBDIR 00:18:07.752 #undef SPDK_CONFIG_LTO 00:18:07.752 #define SPDK_CONFIG_MAX_LCORES 128 00:18:07.752 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:07.752 #define SPDK_CONFIG_NVME_CUSE 1 00:18:07.752 #undef SPDK_CONFIG_OCF 00:18:07.752 #define SPDK_CONFIG_OCF_PATH 00:18:07.752 #define SPDK_CONFIG_OPENSSL_PATH 00:18:07.752 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:07.752 #define SPDK_CONFIG_PGO_DIR 00:18:07.752 #undef SPDK_CONFIG_PGO_USE 00:18:07.752 #define SPDK_CONFIG_PREFIX /usr/local 00:18:07.752 #undef SPDK_CONFIG_RAID5F 00:18:07.752 #undef SPDK_CONFIG_RBD 00:18:07.752 #define SPDK_CONFIG_RDMA 1 00:18:07.752 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:07.752 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:07.752 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:07.752 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:07.752 #define SPDK_CONFIG_SHARED 1 00:18:07.752 #undef SPDK_CONFIG_SMA 00:18:07.752 #define SPDK_CONFIG_TESTS 1 00:18:07.752 #undef SPDK_CONFIG_TSAN 00:18:07.752 #define SPDK_CONFIG_UBLK 1 00:18:07.752 #define SPDK_CONFIG_UBSAN 1 00:18:07.752 #undef SPDK_CONFIG_UNIT_TESTS 00:18:07.752 #undef SPDK_CONFIG_URING 00:18:07.752 #define SPDK_CONFIG_URING_PATH 00:18:07.752 #undef SPDK_CONFIG_URING_ZNS 00:18:07.752 #undef SPDK_CONFIG_USDT 00:18:07.752 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:07.752 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:07.752 #undef SPDK_CONFIG_VFIO_USER 00:18:07.752 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:07.752 #define SPDK_CONFIG_VHOST 1 00:18:07.752 #define SPDK_CONFIG_VIRTIO 1 00:18:07.752 #undef SPDK_CONFIG_VTUNE 00:18:07.752 #define SPDK_CONFIG_VTUNE_DIR 00:18:07.752 #define SPDK_CONFIG_WERROR 1 00:18:07.752 #define SPDK_CONFIG_WPDK_DIR 00:18:07.752 #define SPDK_CONFIG_XNVME 1 00:18:07.752 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:07.752 10:24:01 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:07.752 10:24:02 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.752 10:24:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.752 10:24:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.752 10:24:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.752 10:24:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.752 10:24:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.752 10:24:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.753 10:24:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.753 10:24:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:07.753 10:24:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@68 -- # uname -s 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:07.753 
10:24:02 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:07.753 10:24:02 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:07.753 10:24:02 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:07.754 10:24:02 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:07.754 10:24:02 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:07.755 10:24:02 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
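The block above arms the sanitizer runtimes before any test binary runs: ASAN aborts instead of core-dumping on the first error, UBSAN exits with a distinctive code (134) and a stack trace, and LeakSanitizer is pointed at a suppression file that whitelists a known libfuse3 leak. A minimal standalone sketch of the same wiring, with the option strings and suppression entry copied from the trace (the surrounding helper logic is simplified):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LSAN reads "leak:<pattern>" lines from the suppressions file; matching
    # allocations are dropped from the final leak report.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' >"$supp"
    export LSAN_OPTIONS=suppressions=$supp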
00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70196 ]] 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70196 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.vdmDut 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.vdmDut/tests/xnvme /tmp/spdk.vdmDut 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:07.755 10:24:02 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13968883712 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:07.755 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5598969856 00:18:07.756 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13968883712 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5598969856 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=92308180992 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:08.015 10:24:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=7394598912 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:08.016 * Looking for test storage... 
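set_test_storage needs roughly 2 GiB of scratch space for the xnvme tests, so it first snapshots every mount point. The read loop above consumes df -T output (Filesystem, Type, 1K-blocks, Used, Available, Use%, Mounted on) into associative arrays keyed by mount point; a hedged reconstruction of that loop, with the field order taken from the trace and the 1K-block-to-byte scaling inferred from the printed sizes:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        # df -T reports 1K blocks; the values printed in the trace are bytes.
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

With the table filled in, the candidate directories are checked in order and the first mount with at least requested_size bytes available wins — here /home on btrfs, with about 13.9 GB free.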
00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13968883712 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:08.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.016 --rc genhtml_branch_coverage=1 00:18:08.016 --rc genhtml_function_coverage=1 00:18:08.016 --rc genhtml_legend=1 00:18:08.016 --rc geninfo_all_blocks=1 00:18:08.016 --rc geninfo_unexecuted_blocks=1 00:18:08.016 00:18:08.016 ' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.016 --rc genhtml_branch_coverage=1 00:18:08.016 --rc genhtml_function_coverage=1 00:18:08.016 --rc genhtml_legend=1 00:18:08.016 --rc geninfo_all_blocks=1 
00:18:08.016 --rc geninfo_unexecuted_blocks=1 00:18:08.016 00:18:08.016 ' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.016 --rc genhtml_branch_coverage=1 00:18:08.016 --rc genhtml_function_coverage=1 00:18:08.016 --rc genhtml_legend=1 00:18:08.016 --rc geninfo_all_blocks=1 00:18:08.016 --rc geninfo_unexecuted_blocks=1 00:18:08.016 00:18:08.016 ' 00:18:08.016 10:24:02 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.016 --rc genhtml_branch_coverage=1 00:18:08.016 --rc genhtml_function_coverage=1 00:18:08.016 --rc genhtml_legend=1 00:18:08.016 --rc geninfo_all_blocks=1 00:18:08.016 --rc geninfo_unexecuted_blocks=1 00:18:08.016 00:18:08.016 ' 00:18:08.016 10:24:02 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.016 10:24:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.016 10:24:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.016 10:24:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.016 10:24:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.016 10:24:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:08.016 10:24:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.016 10:24:02 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:18:08.016 10:24:02 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:18:08.017 10:24:02 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:18:08.017 10:24:02 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:08.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:08.534 Waiting for block devices as requested 00:18:08.534 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:08.791 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:08.792 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:08.792 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:14.095 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:14.095 10:24:08 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:18:14.354 10:24:08 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:18:14.354 10:24:08 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:18:14.354 10:24:08 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:18:14.354 10:24:08 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:18:14.354 10:24:08 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:18:14.354 10:24:08 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:18:14.354 10:24:08 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:14.612 No valid GPT data, bailing 00:18:14.612 10:24:08 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:14.612 10:24:08 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:18:14.612 10:24:08 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:14.612 10:24:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:14.612 10:24:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:14.612 10:24:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.612 10:24:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.613 ************************************ 00:18:14.613 START TEST xnvme_rpc 00:18:14.613 ************************************ 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70583 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70583 00:18:14.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70583 ']' 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.613 10:24:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.613 [2024-11-25 10:24:08.835282] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
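The xnvme_rpc test starting here drives everything through the JSON-RPC socket: launch spdk_tgt, create an xnvme bdev on the raw namespace, read the live config back, and assert on each parameter (name, filename, io_mechanism, conserve_cpu) before deleting the bdev and killing the target. The checks on the following lines boil down to this sketch — rpc.py is the stock SPDK client, the jq filter is the one visible in the trace, and the readiness poll is a simplification of waitforlisten:

    ./build/bin/spdk_tgt & spdk_tgt=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

    # Read the bdev config back and pick out one parameter to assert on.
    ./scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # expected: libaio

    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$spdk_tgt"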
00:18:14.613 [2024-11-25 10:24:08.835437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70583 ] 00:18:14.871 [2024-11-25 10:24:09.017300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.871 [2024-11-25 10:24:09.171321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.248 xnvme_bdev 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70583 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70583 ']' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70583 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70583 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.248 killing process with pid 70583 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70583' 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70583 00:18:16.248 10:24:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70583 00:18:18.775 00:18:18.775 real 0m4.120s 00:18:18.775 user 0m4.191s 00:18:18.775 sys 0m0.650s 00:18:18.775 10:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.775 10:24:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.775 ************************************ 00:18:18.775 END TEST xnvme_rpc 00:18:18.775 ************************************ 00:18:18.775 10:24:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:18.775 10:24:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.775 10:24:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.775 10:24:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:18.775 ************************************ 00:18:18.775 START TEST xnvme_bdevperf 00:18:18.775 ************************************ 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:18.775 10:24:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:18.775 { 00:18:18.775 "subsystems": [ 00:18:18.775 { 00:18:18.775 "subsystem": "bdev", 00:18:18.775 "config": [ 00:18:18.775 { 00:18:18.775 "params": { 00:18:18.775 "io_mechanism": "libaio", 00:18:18.775 "conserve_cpu": false, 00:18:18.775 "filename": "/dev/nvme0n1", 00:18:18.775 "name": "xnvme_bdev" 00:18:18.775 }, 00:18:18.775 "method": "bdev_xnvme_create" 00:18:18.775 }, 00:18:18.775 { 00:18:18.775 "method": "bdev_wait_for_examine" 00:18:18.775 } 00:18:18.775 ] 00:18:18.776 } 00:18:18.776 ] 00:18:18.776 } 00:18:18.776 [2024-11-25 10:24:12.978058] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:18:18.776 [2024-11-25 10:24:12.978223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70668 ] 00:18:19.034 [2024-11-25 10:24:13.153739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.034 [2024-11-25 10:24:13.300729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.600 Running I/O for 5 seconds... 00:18:21.531 30614.00 IOPS, 119.59 MiB/s [2024-11-25T10:24:16.800Z] 30627.50 IOPS, 119.64 MiB/s [2024-11-25T10:24:17.733Z] 30996.33 IOPS, 121.08 MiB/s [2024-11-25T10:24:19.112Z] 30656.50 IOPS, 119.75 MiB/s 00:18:24.779 Latency(us) 00:18:24.779 [2024-11-25T10:24:19.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.779 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:24.779 xnvme_bdev : 5.00 30462.06 118.99 0.00 0.00 2096.37 286.72 7596.22 00:18:24.779 [2024-11-25T10:24:19.112Z] =================================================================================================================== 00:18:24.779 [2024-11-25T10:24:19.112Z] Total : 30462.06 118.99 0.00 0.00 2096.37 286.72 7596.22 00:18:25.714 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:25.714 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:25.714 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:25.714 10:24:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:25.714 10:24:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:25.714 { 00:18:25.714 "subsystems": [ 00:18:25.714 { 00:18:25.714 "subsystem": "bdev", 00:18:25.715 "config": [ 00:18:25.715 { 00:18:25.715 "params": { 00:18:25.715 "io_mechanism": "libaio", 00:18:25.715 "conserve_cpu": false, 00:18:25.715 "filename": "/dev/nvme0n1", 00:18:25.715 "name": "xnvme_bdev" 00:18:25.715 }, 00:18:25.715 "method": "bdev_xnvme_create" 00:18:25.715 }, 00:18:25.715 { 00:18:25.715 "method": "bdev_wait_for_examine" 00:18:25.715 } 00:18:25.715 ] 00:18:25.715 } 00:18:25.715 ] 00:18:25.715 } 00:18:25.715 [2024-11-25 10:24:20.036111] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
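The randread pass above and the randwrite pass starting here share one invocation shape: bdevperf takes the bdev-layer config as JSON on an inherited file descriptor (/dev/fd/62 in the trace) rather than a config file, -q and -o set queue depth and I/O size, -t the duration, and -T restricts the run to the xnvme bdev. An equivalent standalone command, using process substitution in place of the fd plumbing (the JSON matches the gen_conf output printed above):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
       "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'

    ./build/examples/bdevperf -q 64 -o 4096 -w randwrite -t 5 \
        -T xnvme_bdev --json <(printf '%s\n' "$conf")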
00:18:25.715 [2024-11-25 10:24:20.036299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70752 ] 00:18:25.973 [2024-11-25 10:24:20.238552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.231 [2024-11-25 10:24:20.397498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.798 Running I/O for 5 seconds... 00:18:28.666 23398.00 IOPS, 91.40 MiB/s [2024-11-25T10:24:23.933Z] 24030.00 IOPS, 93.87 MiB/s [2024-11-25T10:24:24.868Z] 25042.00 IOPS, 97.82 MiB/s [2024-11-25T10:24:26.242Z] 25859.75 IOPS, 101.01 MiB/s 00:18:31.909 Latency(us) 00:18:31.909 [2024-11-25T10:24:26.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.909 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:31.909 xnvme_bdev : 5.00 25995.03 101.54 0.00 0.00 2456.35 186.18 5242.88 00:18:31.909 [2024-11-25T10:24:26.242Z] =================================================================================================================== 00:18:31.909 [2024-11-25T10:24:26.242Z] Total : 25995.03 101.54 0.00 0.00 2456.35 186.18 5242.88 00:18:32.841 00:18:32.841 real 0m14.118s 00:18:32.841 user 0m5.420s 00:18:32.841 sys 0m6.225s 00:18:32.841 10:24:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.841 10:24:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:32.841 ************************************ 00:18:32.841 END TEST xnvme_bdevperf 00:18:32.841 ************************************ 00:18:32.841 10:24:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:32.841 10:24:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:32.841 10:24:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.841 10:24:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:32.841 ************************************ 00:18:32.841 START TEST xnvme_fio_plugin 00:18:32.841 ************************************ 00:18:32.841 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:32.841 10:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:32.841 10:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:18:32.841 10:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:32.841 10:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:32.842 10:24:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.842 { 00:18:32.842 "subsystems": [ 00:18:32.842 { 00:18:32.842 "subsystem": "bdev", 00:18:32.842 "config": [ 00:18:32.842 { 00:18:32.842 "params": { 00:18:32.842 "io_mechanism": "libaio", 00:18:32.842 "conserve_cpu": false, 00:18:32.842 "filename": "/dev/nvme0n1", 00:18:32.842 "name": "xnvme_bdev" 00:18:32.842 }, 00:18:32.842 "method": "bdev_xnvme_create" 00:18:32.842 }, 00:18:32.842 { 00:18:32.842 "method": "bdev_wait_for_examine" 00:18:32.842 } 00:18:32.842 ] 00:18:32.842 } 00:18:32.842 ] 00:18:32.842 } 00:18:33.099 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:33.099 fio-3.35 00:18:33.099 Starting 1 thread 00:18:39.700 00:18:39.700 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70879: Mon Nov 25 10:24:33 2024 00:18:39.700 read: IOPS=26.0k, BW=102MiB/s (107MB/s)(508MiB/5001msec) 00:18:39.700 slat (usec): min=5, max=872, avg=34.11, stdev=29.90 00:18:39.700 clat (usec): min=115, max=5404, avg=1368.21, stdev=782.63 00:18:39.700 lat (usec): min=167, max=5442, avg=1402.32, stdev=786.67 00:18:39.700 clat percentiles (usec): 00:18:39.700 | 1.00th=[ 237], 5.00th=[ 355], 10.00th=[ 469], 20.00th=[ 676], 00:18:39.700 | 30.00th=[ 857], 40.00th=[ 1037], 50.00th=[ 1221], 60.00th=[ 1434], 00:18:39.700 | 70.00th=[ 1696], 80.00th=[ 2024], 90.00th=[ 2474], 95.00th=[ 2802], 00:18:39.700 | 99.00th=[ 3687], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[ 4686], 00:18:39.700 | 99.99th=[ 5080] 00:18:39.700 bw ( KiB/s): min=91481, max=114984, per=98.26%, avg=102283.67, stdev=9085.40, samples=9 
00:18:39.700 iops : min=22870, max=28746, avg=25570.89, stdev=2271.39, samples=9 00:18:39.700 lat (usec) : 250=1.37%, 500=10.03%, 750=12.62%, 1000=14.14% 00:18:39.700 lat (msec) : 2=41.29%, 4=20.03%, 10=0.53% 00:18:39.700 cpu : usr=25.84%, sys=52.14%, ctx=79, majf=0, minf=764 00:18:39.700 IO depths : 1=0.1%, 2=1.6%, 4=5.1%, 8=11.7%, 16=25.4%, 32=54.3%, >=64=1.7% 00:18:39.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.700 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:39.700 issued rwts: total=130139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.700 00:18:39.700 Run status group 0 (all jobs): 00:18:39.700 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=508MiB (533MB), run=5001-5001msec 00:18:40.635 ----------------------------------------------------- 00:18:40.635 Suppressions used: 00:18:40.635 count bytes template 00:18:40.635 1 11 /usr/src/fio/parse.c 00:18:40.635 1 8 libtcmalloc_minimal.so 00:18:40.635 1 904 libcrypto.so 00:18:40.635 ----------------------------------------------------- 00:18:40.635 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- 
# [[ -n /usr/lib64/libasan.so.8 ]] 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:40.635 10:24:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:40.635 { 00:18:40.635 "subsystems": [ 00:18:40.635 { 00:18:40.635 "subsystem": "bdev", 00:18:40.635 "config": [ 00:18:40.635 { 00:18:40.635 "params": { 00:18:40.635 "io_mechanism": "libaio", 00:18:40.635 "conserve_cpu": false, 00:18:40.635 "filename": "/dev/nvme0n1", 00:18:40.635 "name": "xnvme_bdev" 00:18:40.635 }, 00:18:40.635 "method": "bdev_xnvme_create" 00:18:40.635 }, 00:18:40.635 { 00:18:40.635 "method": "bdev_wait_for_examine" 00:18:40.635 } 00:18:40.635 ] 00:18:40.635 } 00:18:40.635 ] 00:18:40.635 } 00:18:40.635 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:40.635 fio-3.35 00:18:40.635 Starting 1 thread 00:18:47.213 00:18:47.213 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70982: Mon Nov 25 10:24:40 2024 00:18:47.213 write: IOPS=23.9k, BW=93.4MiB/s (97.9MB/s)(467MiB/5001msec); 0 zone resets 00:18:47.213 slat (usec): min=5, max=914, avg=37.55, stdev=27.86 00:18:47.213 clat (usec): min=115, max=5770, avg=1463.87, stdev=809.67 00:18:47.213 lat (usec): min=177, max=5813, avg=1501.42, stdev=812.87 00:18:47.213 clat percentiles (usec): 00:18:47.213 | 1.00th=[ 247], 5.00th=[ 363], 10.00th=[ 478], 20.00th=[ 701], 00:18:47.213 | 30.00th=[ 922], 40.00th=[ 1123], 50.00th=[ 1352], 60.00th=[ 1598], 00:18:47.213 | 70.00th=[ 1876], 80.00th=[ 2180], 90.00th=[ 2573], 95.00th=[ 2868], 00:18:47.213 | 99.00th=[ 3654], 99.50th=[ 3982], 99.90th=[ 4555], 99.95th=[ 4817], 00:18:47.213 | 99.99th=[ 5276] 00:18:47.213 bw ( KiB/s): min=85296, max=116312, per=100.00%, avg=96728.89, stdev=9576.75, samples=9 00:18:47.213 iops : min=21324, max=29078, avg=24182.22, stdev=2394.19, samples=9 00:18:47.213 lat (usec) : 250=1.11%, 500=9.92%, 750=11.20%, 1000=11.67% 00:18:47.213 lat (msec) : 2=40.32%, 4=25.31%, 10=0.47% 00:18:47.213 cpu : usr=23.62%, sys=54.72%, ctx=129, majf=0, minf=652 00:18:47.213 IO depths : 1=0.1%, 2=1.6%, 4=5.5%, 8=12.2%, 16=25.8%, 32=53.1%, >=64=1.7% 00:18:47.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.213 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:47.213 issued rwts: total=0,119581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.213 00:18:47.213 Run status group 0 (all jobs): 00:18:47.213 WRITE: bw=93.4MiB/s (97.9MB/s), 93.4MiB/s-93.4MiB/s (97.9MB/s-97.9MB/s), io=467MiB (490MB), run=5001-5001msec 00:18:48.149 ----------------------------------------------------- 00:18:48.149 Suppressions used: 00:18:48.149 count bytes template 00:18:48.149 1 11 /usr/src/fio/parse.c 00:18:48.149 1 8 libtcmalloc_minimal.so 00:18:48.149 1 904 libcrypto.so 00:18:48.149 ----------------------------------------------------- 00:18:48.149 00:18:48.149 ************************************ 00:18:48.149 END TEST xnvme_fio_plugin 00:18:48.149 ************************************ 00:18:48.149 00:18:48.149 real 
0m15.132s 00:18:48.149 user 0m6.465s 00:18:48.149 sys 0m6.121s 00:18:48.149 10:24:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.149 10:24:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:48.149 10:24:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:48.149 10:24:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:48.149 10:24:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:48.149 10:24:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:48.149 10:24:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:48.149 10:24:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.149 10:24:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:48.149 ************************************ 00:18:48.149 START TEST xnvme_rpc 00:18:48.149 ************************************ 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71067 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71067 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71067 ']' 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.149 10:24:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.149 [2024-11-25 10:24:42.372554] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
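Annotation: the xnvme_rpc test traced above has just launched a bare spdk_tgt and is waiting for its RPC socket. A minimal manual equivalent of that lifecycle, assuming SPDK's stock rpc.py client and a simple poll in place of the more elaborate waitforlisten helper, would look roughly like this:

    SPDK=/home/vagrant/spdk_repo/spdk                      # repo path from the trace
    $SPDK/build/bin/spdk_tgt &                             # xnvme.sh@52
    tgt_pid=$!
    # crude stand-in for waitforlisten: poll until the UNIX socket answers RPCs
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # register the bdev exactly as traced: filename, bdev name, io_mechanism, -c = conserve_cpu
    $SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    $SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev      # xnvme.sh@67
    kill $tgt_pid                                          # killprocess equivalent

Only the socket-polling loop is an assumption; the create/delete calls and paths are copied from the trace.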
00:18:48.149 [2024-11-25 10:24:42.372735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:18:48.407 [2024-11-25 10:24:42.559475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.407 [2024-11-25 10:24:42.691267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.343 xnvme_bdev 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.343 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:49.603 10:24:43 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71067 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71067 ']' 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71067 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71067 00:18:49.603 killing process with pid 71067 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71067' 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71067 00:18:49.603 10:24:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71067 00:18:52.136 ************************************ 00:18:52.136 END TEST xnvme_rpc 00:18:52.136 ************************************ 00:18:52.136 00:18:52.136 real 0m3.837s 00:18:52.136 user 0m4.039s 00:18:52.136 sys 0m0.548s 00:18:52.136 10:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.136 10:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.136 10:24:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:52.136 10:24:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.136 10:24:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.136 10:24:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:52.136 ************************************ 00:18:52.136 START TEST xnvme_bdevperf 00:18:52.136 ************************************ 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:52.136 10:24:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:52.136 { 00:18:52.136 "subsystems": [ 00:18:52.136 { 00:18:52.136 "subsystem": "bdev", 00:18:52.136 "config": [ 00:18:52.136 { 00:18:52.136 "params": { 00:18:52.136 "io_mechanism": "libaio", 00:18:52.136 "conserve_cpu": true, 00:18:52.136 "filename": "/dev/nvme0n1", 00:18:52.136 "name": "xnvme_bdev" 00:18:52.136 }, 00:18:52.136 "method": "bdev_xnvme_create" 00:18:52.136 }, 00:18:52.136 { 00:18:52.136 "method": "bdev_wait_for_examine" 00:18:52.136 } 00:18:52.136 ] 00:18:52.136 } 00:18:52.136 ] 00:18:52.136 } 00:18:52.136 [2024-11-25 10:24:46.228167] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:18:52.136 [2024-11-25 10:24:46.228322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:18:52.136 [2024-11-25 10:24:46.405696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.394 [2024-11-25 10:24:46.554711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.652 Running I/O for 5 seconds... 00:18:54.595 25365.00 IOPS, 99.08 MiB/s [2024-11-25T10:24:50.304Z] 26081.50 IOPS, 101.88 MiB/s [2024-11-25T10:24:51.239Z] 26138.33 IOPS, 102.10 MiB/s [2024-11-25T10:24:52.174Z] 26487.00 IOPS, 103.46 MiB/s 00:18:57.841 Latency(us) 00:18:57.841 [2024-11-25T10:24:52.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.841 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:57.841 xnvme_bdev : 5.00 26533.20 103.65 0.00 0.00 2406.61 216.90 5510.98 00:18:57.841 [2024-11-25T10:24:52.174Z] =================================================================================================================== 00:18:57.841 [2024-11-25T10:24:52.174Z] Total : 26533.20 103.65 0.00 0.00 2406.61 216.90 5510.98 00:18:58.774 10:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:58.774 10:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:58.774 10:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:58.774 10:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:58.774 10:24:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:59.032 { 00:18:59.032 "subsystems": [ 00:18:59.032 { 00:18:59.032 "subsystem": "bdev", 00:18:59.032 "config": [ 00:18:59.032 { 00:18:59.032 "params": { 00:18:59.032 "io_mechanism": "libaio", 00:18:59.032 "conserve_cpu": true, 00:18:59.032 "filename": "/dev/nvme0n1", 00:18:59.032 "name": "xnvme_bdev" 00:18:59.032 }, 00:18:59.032 "method": "bdev_xnvme_create" 00:18:59.032 }, 00:18:59.032 { 00:18:59.032 "method": "bdev_wait_for_examine" 00:18:59.032 } 00:18:59.032 ] 00:18:59.032 } 00:18:59.032 ] 00:18:59.032 } 00:18:59.032 [2024-11-25 10:24:53.195166] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
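Annotation: the bdevperf run that follows was fed its configuration on /dev/fd/62, so the JSON shown above never touches disk. A sketch of that plumbing, assuming gen_conf simply prints the JSON and the harness hands it over via process substitution (which is what the /dev/fd/62 path implies):

    conf='{"subsystems": [{"subsystem": "bdev", "config": [
            {"params": {"io_mechanism": "libaio", "conserve_cpu": true,
                        "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
             "method": "bdev_xnvme_create"},
            {"method": "bdev_wait_for_examine"}]}]}'
    # <(...) expands to a /dev/fd/NN path, matching --json /dev/fd/62 in the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(printf '%s' "$conf") -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The JSON body and all bdevperf flags are verbatim from the trace; only the shell variable and printf wiring are reconstructed.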
00:18:59.032 [2024-11-25 10:24:53.195342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:18:59.291 [2024-11-25 10:24:53.385276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.291 [2024-11-25 10:24:53.560540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.857 Running I/O for 5 seconds... 00:19:01.726 9557.00 IOPS, 37.33 MiB/s [2024-11-25T10:24:56.993Z] 6574.50 IOPS, 25.68 MiB/s [2024-11-25T10:24:58.367Z] 8220.33 IOPS, 32.11 MiB/s [2024-11-25T10:24:59.302Z] 9469.50 IOPS, 36.99 MiB/s [2024-11-25T10:24:59.302Z] 8215.40 IOPS, 32.09 MiB/s 00:19:04.969 Latency(us) 00:19:04.969 [2024-11-25T10:24:59.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.969 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:04.969 xnvme_bdev : 5.02 8202.67 32.04 0.00 0.00 7788.30 50.27 65297.69 00:19:04.969 [2024-11-25T10:24:59.302Z] =================================================================================================================== 00:19:04.969 [2024-11-25T10:24:59.302Z] Total : 8202.67 32.04 0.00 0.00 7788.30 50.27 65297.69 00:19:05.903 00:19:05.903 real 0m14.043s 00:19:05.903 user 0m8.352s 00:19:05.903 sys 0m3.843s 00:19:05.903 10:25:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.903 ************************************ 00:19:05.903 END TEST xnvme_bdevperf 00:19:05.903 ************************************ 00:19:05.903 10:25:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:05.904 10:25:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:05.904 10:25:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.904 10:25:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.904 10:25:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.904 ************************************ 00:19:05.904 START TEST xnvme_fio_plugin 00:19:05.904 ************************************ 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:05.904 10:25:00 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:05.904 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:06.162 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:06.162 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:06.162 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:06.163 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:06.163 10:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:06.163 { 00:19:06.163 "subsystems": [ 00:19:06.163 { 00:19:06.163 "subsystem": "bdev", 00:19:06.163 "config": [ 00:19:06.163 { 00:19:06.163 "params": { 00:19:06.163 "io_mechanism": "libaio", 00:19:06.163 "conserve_cpu": true, 00:19:06.163 "filename": "/dev/nvme0n1", 00:19:06.163 "name": "xnvme_bdev" 00:19:06.163 }, 00:19:06.163 "method": "bdev_xnvme_create" 00:19:06.163 }, 00:19:06.163 { 00:19:06.163 "method": "bdev_wait_for_examine" 00:19:06.163 } 00:19:06.163 ] 00:19:06.163 } 00:19:06.163 ] 00:19:06.163 } 00:19:06.163 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:06.163 fio-3.35 00:19:06.163 Starting 1 thread 00:19:12.723 00:19:12.723 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71354: Mon Nov 25 10:25:06 2024 00:19:12.723 read: IOPS=26.9k, BW=105MiB/s (110MB/s)(526MiB/5001msec) 00:19:12.723 slat (usec): min=5, max=879, avg=32.97, stdev=30.04 00:19:12.723 clat (usec): min=54, max=5858, avg=1334.82, stdev=739.71 00:19:12.723 lat (usec): min=80, max=5899, avg=1367.79, stdev=742.42 00:19:12.723 clat percentiles (usec): 00:19:12.723 | 1.00th=[ 239], 5.00th=[ 351], 10.00th=[ 461], 20.00th=[ 668], 00:19:12.723 | 30.00th=[ 857], 40.00th=[ 1029], 50.00th=[ 1205], 60.00th=[ 1418], 00:19:12.723 | 70.00th=[ 1663], 80.00th=[ 1975], 90.00th=[ 2376], 95.00th=[ 2671], 00:19:12.723 | 99.00th=[ 3458], 99.50th=[ 3818], 99.90th=[ 4424], 99.95th=[ 4621], 00:19:12.723 | 99.99th=[ 5014] 00:19:12.723 bw ( KiB/s): min=96528, max=124712, 
per=100.00%, avg=107798.33, stdev=9454.95, samples=9 00:19:12.723 iops : min=24132, max=31178, avg=26949.56, stdev=2363.75, samples=9 00:19:12.723 lat (usec) : 100=0.01%, 250=1.32%, 500=10.36%, 750=12.66%, 1000=14.12% 00:19:12.723 lat (msec) : 2=42.29%, 4=18.93%, 10=0.31% 00:19:12.723 cpu : usr=23.58%, sys=54.28%, ctx=111, majf=0, minf=764 00:19:12.723 IO depths : 1=0.2%, 2=1.6%, 4=5.1%, 8=11.7%, 16=25.5%, 32=54.2%, >=64=1.7% 00:19:12.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.723 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:12.723 issued rwts: total=134547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.723 00:19:12.723 Run status group 0 (all jobs): 00:19:12.723 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=526MiB (551MB), run=5001-5001msec 00:19:13.673 ----------------------------------------------------- 00:19:13.673 Suppressions used: 00:19:13.673 count bytes template 00:19:13.673 1 11 /usr/src/fio/parse.c 00:19:13.673 1 8 libtcmalloc_minimal.so 00:19:13.674 1 904 libcrypto.so 00:19:13.674 ----------------------------------------------------- 00:19:13.674 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:13.674 10:25:07 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.674 10:25:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:13.674 { 00:19:13.674 "subsystems": [ 00:19:13.674 { 00:19:13.674 "subsystem": "bdev", 00:19:13.674 "config": [ 00:19:13.674 { 00:19:13.674 "params": { 00:19:13.674 "io_mechanism": "libaio", 00:19:13.674 "conserve_cpu": true, 00:19:13.674 "filename": "/dev/nvme0n1", 00:19:13.674 "name": "xnvme_bdev" 00:19:13.674 }, 00:19:13.674 "method": "bdev_xnvme_create" 00:19:13.674 }, 00:19:13.674 { 00:19:13.674 "method": "bdev_wait_for_examine" 00:19:13.674 } 00:19:13.674 ] 00:19:13.674 } 00:19:13.674 ] 00:19:13.674 } 00:19:13.932 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:13.932 fio-3.35 00:19:13.932 Starting 1 thread 00:19:20.511 00:19:20.511 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71451: Mon Nov 25 10:25:13 2024 00:19:20.511 write: IOPS=25.4k, BW=99.2MiB/s (104MB/s)(496MiB/5001msec); 0 zone resets 00:19:20.511 slat (usec): min=4, max=3506, avg=35.09, stdev=32.93 00:19:20.511 clat (usec): min=46, max=6723, avg=1402.23, stdev=795.83 00:19:20.511 lat (usec): min=87, max=6871, avg=1437.32, stdev=799.24 00:19:20.511 clat percentiles (usec): 00:19:20.511 | 1.00th=[ 247], 5.00th=[ 363], 10.00th=[ 474], 20.00th=[ 685], 00:19:20.511 | 30.00th=[ 881], 40.00th=[ 1074], 50.00th=[ 1270], 60.00th=[ 1483], 00:19:20.511 | 70.00th=[ 1745], 80.00th=[ 2073], 90.00th=[ 2507], 95.00th=[ 2802], 00:19:20.511 | 99.00th=[ 3720], 99.50th=[ 4080], 99.90th=[ 4948], 99.95th=[ 5211], 00:19:20.511 | 99.99th=[ 5800] 00:19:20.511 bw ( KiB/s): min=88566, max=117912, per=100.00%, avg=101653.89, stdev=9587.67, samples=9 00:19:20.511 iops : min=22141, max=29478, avg=25413.33, stdev=2397.05, samples=9 00:19:20.511 lat (usec) : 50=0.01%, 100=0.01%, 250=1.10%, 500=10.13%, 750=12.04% 00:19:20.511 lat (usec) : 1000=13.06% 00:19:20.511 lat (msec) : 2=41.52%, 4=21.54%, 10=0.61% 00:19:20.511 cpu : usr=24.44%, sys=53.68%, ctx=65, majf=0, minf=764 00:19:20.511 IO depths : 1=0.1%, 2=1.6%, 4=5.2%, 8=12.0%, 16=25.7%, 32=53.6%, >=64=1.7% 00:19:20.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.511 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:20.511 issued rwts: total=0,126965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:20.511 00:19:20.511 Run status group 0 (all jobs): 00:19:20.511 WRITE: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=496MiB (520MB), run=5001-5001msec 00:19:21.445 ----------------------------------------------------- 00:19:21.445 Suppressions used: 00:19:21.445 count bytes template 00:19:21.445 1 11 /usr/src/fio/parse.c 00:19:21.445 1 8 libtcmalloc_minimal.so 00:19:21.445 1 904 libcrypto.so 00:19:21.445 ----------------------------------------------------- 00:19:21.445 00:19:21.445 00:19:21.445 real 0m15.273s 00:19:21.445 user 0m6.427s 
00:19:21.445 sys 0m6.289s 00:19:21.445 10:25:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.445 10:25:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 ************************************ 00:19:21.445 END TEST xnvme_fio_plugin 00:19:21.445 ************************************ 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:21.445 10:25:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:21.445 10:25:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.445 10:25:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.445 10:25:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 ************************************ 00:19:21.445 START TEST xnvme_rpc 00:19:21.445 ************************************ 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71539 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71539 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71539 ']' 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.445 10:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 [2024-11-25 10:25:15.693925] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
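Annotation: the trace has just switched from libaio to io_uring, which makes this a good point to note the overall shape of the run. Everything in this log is one pass of the matrix driver visible at xnvme.sh@75-88: each io_mechanism is crossed with conserve_cpu false/true, and the same three tests rerun for every combination. A sketch reconstructed from the traced line numbers (array contents beyond the libaio/io_uring and false/true values seen here are an assumption):

    declare -A method_bdev_xnvme_create_0
    xnvme_io=(libaio io_uring)                             # assumed: only the mechanisms seen in this log
    xnvme_conserve_cpu=(false true)
    for io in "${xnvme_io[@]}"; do                         # xnvme.sh@75
        method_bdev_xnvme_create_0["io_mechanism"]=$io     # xnvme.sh@76
        method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1  # xnvme.sh@77
        filename=/dev/nvme0n1                              # xnvme.sh@79
        name=xnvme_bdev                                    # xnvme.sh@80
        for cc in "${xnvme_conserve_cpu[@]}"; do           # xnvme.sh@82
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc # xnvme.sh@83
            conserve_cpu=$cc                               # xnvme.sh@84
            run_test xnvme_rpc xnvme_rpc                   # xnvme.sh@86
            run_test xnvme_bdevperf xnvme_bdevperf         # xnvme.sh@87
            run_test xnvme_fio_plugin xnvme_fio_plugin     # xnvme.sh@88
        done
    done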
00:19:21.445 [2024-11-25 10:25:15.694102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71539 ] 00:19:21.704 [2024-11-25 10:25:15.886533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.962 [2024-11-25 10:25:16.048293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.894 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.895 xnvme_bdev 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.895 10:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71539 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71539 ']' 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71539 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.895 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71539 00:19:23.153 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.153 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.153 killing process with pid 71539 00:19:23.153 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71539' 00:19:23.153 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71539 00:19:23.153 10:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71539 00:19:25.722 00:19:25.722 real 0m3.884s 00:19:25.722 user 0m4.064s 00:19:25.722 sys 0m0.602s 00:19:25.722 10:25:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.722 ************************************ 00:19:25.722 END TEST xnvme_rpc 00:19:25.722 ************************************ 00:19:25.722 10:25:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.722 10:25:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:25.722 10:25:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:25.722 10:25:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.722 10:25:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.722 ************************************ 00:19:25.722 START TEST xnvme_bdevperf 00:19:25.722 ************************************ 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:25.722 10:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:25.722 { 00:19:25.722 "subsystems": [ 00:19:25.722 { 00:19:25.722 "subsystem": "bdev", 00:19:25.722 "config": [ 00:19:25.722 { 00:19:25.722 "params": { 00:19:25.722 "io_mechanism": "io_uring", 00:19:25.722 "conserve_cpu": false, 00:19:25.722 "filename": "/dev/nvme0n1", 00:19:25.722 "name": "xnvme_bdev" 00:19:25.722 }, 00:19:25.722 "method": "bdev_xnvme_create" 00:19:25.722 }, 00:19:25.722 { 00:19:25.722 "method": "bdev_wait_for_examine" 00:19:25.722 } 00:19:25.722 ] 00:19:25.722 } 00:19:25.723 ] 00:19:25.723 } 00:19:25.723 [2024-11-25 10:25:19.657828] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:19:25.723 [2024-11-25 10:25:19.658045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71619 ] 00:19:25.723 [2024-11-25 10:25:19.851821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.723 [2024-11-25 10:25:20.008562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.291 Running I/O for 5 seconds... 00:19:28.163 40404.00 IOPS, 157.83 MiB/s [2024-11-25T10:25:23.433Z] 41769.00 IOPS, 163.16 MiB/s [2024-11-25T10:25:24.369Z] 43259.33 IOPS, 168.98 MiB/s [2024-11-25T10:25:25.745Z] 43010.50 IOPS, 168.01 MiB/s [2024-11-25T10:25:25.745Z] 43703.80 IOPS, 170.72 MiB/s 00:19:31.412 Latency(us) 00:19:31.412 [2024-11-25T10:25:25.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.412 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:31.412 xnvme_bdev : 5.01 43664.66 170.57 0.00 0.00 1461.30 68.42 13881.72 00:19:31.412 [2024-11-25T10:25:25.745Z] =================================================================================================================== 00:19:31.412 [2024-11-25T10:25:25.745Z] Total : 43664.66 170.57 0.00 0.00 1461.30 68.42 13881.72 00:19:32.346 10:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:32.346 10:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:32.346 10:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:32.346 10:25:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:32.346 10:25:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:32.346 { 00:19:32.346 "subsystems": [ 00:19:32.346 { 00:19:32.346 "subsystem": "bdev", 00:19:32.346 "config": [ 00:19:32.346 { 00:19:32.346 "params": { 00:19:32.346 "io_mechanism": "io_uring", 00:19:32.346 "conserve_cpu": false, 00:19:32.346 "filename": "/dev/nvme0n1", 00:19:32.346 "name": "xnvme_bdev" 00:19:32.346 }, 00:19:32.346 "method": "bdev_xnvme_create" 00:19:32.346 }, 00:19:32.346 { 00:19:32.346 "method": "bdev_wait_for_examine" 00:19:32.347 } 00:19:32.347 ] 00:19:32.347 } 00:19:32.347 ] 00:19:32.347 } 00:19:32.347 [2024-11-25 10:25:26.540605] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
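Annotation: a quick consistency check on the bdevperf summary table just above. The MiB/s column is simply IOPS times the 4096-byte IO size: 43664.66 IOPS × 4096 B = 178.85 MB/s = 170.56 MiB/s, matching the reported 170.57 within rounding. The Average/min/max columns are latency in microseconds, per the Latency(us) header.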
00:19:32.347 [2024-11-25 10:25:26.540815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71700 ] 00:19:32.604 [2024-11-25 10:25:26.717192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.605 [2024-11-25 10:25:26.865868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.170 Running I/O for 5 seconds... 00:19:35.079 42272.00 IOPS, 165.12 MiB/s [2024-11-25T10:25:30.351Z] 41488.00 IOPS, 162.06 MiB/s [2024-11-25T10:25:31.285Z] 41738.67 IOPS, 163.04 MiB/s [2024-11-25T10:25:32.659Z] 41864.00 IOPS, 163.53 MiB/s 00:19:38.326 Latency(us) 00:19:38.326 [2024-11-25T10:25:32.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.326 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:38.326 xnvme_bdev : 5.00 41621.39 162.58 0.00 0.00 1532.41 644.19 5957.82 00:19:38.326 [2024-11-25T10:25:32.659Z] =================================================================================================================== 00:19:38.326 [2024-11-25T10:25:32.659Z] Total : 41621.39 162.58 0.00 0.00 1532.41 644.19 5957.82 00:19:39.261 00:19:39.262 real 0m13.901s 00:19:39.262 user 0m7.067s 00:19:39.262 sys 0m6.602s 00:19:39.262 10:25:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.262 10:25:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:39.262 ************************************ 00:19:39.262 END TEST xnvme_bdevperf 00:19:39.262 ************************************ 00:19:39.262 10:25:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:39.262 10:25:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:39.262 10:25:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.262 10:25:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:39.262 ************************************ 00:19:39.262 START TEST xnvme_fio_plugin 00:19:39.262 ************************************ 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:39.262 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:39.262 { 00:19:39.262 "subsystems": [ 00:19:39.262 { 00:19:39.262 "subsystem": "bdev", 00:19:39.262 "config": [ 00:19:39.262 { 00:19:39.262 "params": { 00:19:39.262 "io_mechanism": "io_uring", 00:19:39.262 "conserve_cpu": false, 00:19:39.262 "filename": "/dev/nvme0n1", 00:19:39.262 "name": "xnvme_bdev" 00:19:39.262 }, 00:19:39.262 "method": "bdev_xnvme_create" 00:19:39.262 }, 00:19:39.262 { 00:19:39.262 "method": "bdev_wait_for_examine" 00:19:39.262 } 00:19:39.262 ] 00:19:39.262 } 00:19:39.262 ] 00:19:39.262 } 00:19:39.520 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:39.520 fio-3.35 00:19:39.520 Starting 1 thread 00:19:46.078 00:19:46.078 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71820: Mon Nov 25 10:25:39 2024 00:19:46.078 read: IOPS=48.5k, BW=190MiB/s (199MB/s)(948MiB/5002msec) 00:19:46.078 slat (usec): min=2, max=346, avg= 3.90, stdev= 2.20 00:19:46.078 clat (usec): min=774, max=2846, avg=1159.92, stdev=175.23 00:19:46.078 lat (usec): min=778, max=2857, avg=1163.82, stdev=175.76 00:19:46.078 clat percentiles (usec): 00:19:46.078 | 1.00th=[ 881], 5.00th=[ 938], 10.00th=[ 971], 20.00th=[ 1020], 00:19:46.078 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1139], 60.00th=[ 1172], 00:19:46.078 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1369], 95.00th=[ 1483], 00:19:46.078 | 99.00th=[ 1713], 99.50th=[ 1811], 99.90th=[ 2343], 99.95th=[ 2573], 00:19:46.078 | 99.99th=[ 2769] 00:19:46.078 bw ( KiB/s): min=177408, max=214016, per=100.00%, avg=194816.00, 
stdev=12369.73, samples=9 00:19:46.078 iops : min=44352, max=53504, avg=48704.00, stdev=3092.43, samples=9 00:19:46.078 lat (usec) : 1000=14.83% 00:19:46.078 lat (msec) : 2=84.92%, 4=0.25% 00:19:46.078 cpu : usr=37.65%, sys=60.93%, ctx=44, majf=0, minf=762 00:19:46.078 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:19:46.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.078 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:46.078 issued rwts: total=242784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.078 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.078 00:19:46.078 Run status group 0 (all jobs): 00:19:46.078 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=948MiB (994MB), run=5002-5002msec 00:19:46.645 ----------------------------------------------------- 00:19:46.645 Suppressions used: 00:19:46.645 count bytes template 00:19:46.645 1 11 /usr/src/fio/parse.c 00:19:46.645 1 8 libtcmalloc_minimal.so 00:19:46.645 1 904 libcrypto.so 00:19:46.645 ----------------------------------------------------- 00:19:46.645 00:19:46.903 10:25:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:46.903 10:25:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:46.904 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:46.904 10:25:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:46.904 10:25:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:19:46.904 10:25:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:46.904 10:25:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:46.904 10:25:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:46.904 { 00:19:46.904 "subsystems": [ 00:19:46.904 { 00:19:46.904 "subsystem": "bdev", 00:19:46.904 "config": [ 00:19:46.904 { 00:19:46.904 "params": { 00:19:46.904 "io_mechanism": "io_uring", 00:19:46.904 "conserve_cpu": false, 00:19:46.904 "filename": "/dev/nvme0n1", 00:19:46.904 "name": "xnvme_bdev" 00:19:46.904 }, 00:19:46.904 "method": "bdev_xnvme_create" 00:19:46.904 }, 00:19:46.904 { 00:19:46.904 "method": "bdev_wait_for_examine" 00:19:46.904 } 00:19:46.904 ] 00:19:46.904 } 00:19:46.904 ] 00:19:46.904 } 00:19:47.162 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:47.162 fio-3.35 00:19:47.162 Starting 1 thread 00:19:53.761 00:19:53.761 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71917: Mon Nov 25 10:25:47 2024 00:19:53.761 write: IOPS=45.6k, BW=178MiB/s (187MB/s)(891MiB/5001msec); 0 zone resets 00:19:53.761 slat (nsec): min=2987, max=74890, avg=4662.93, stdev=2176.96 00:19:53.761 clat (usec): min=460, max=3100, avg=1220.37, stdev=201.24 00:19:53.761 lat (usec): min=464, max=3141, avg=1225.04, stdev=202.32 00:19:53.761 clat percentiles (usec): 00:19:53.761 | 1.00th=[ 914], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1074], 00:19:53.761 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1188], 60.00th=[ 1221], 00:19:53.761 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1483], 95.00th=[ 1647], 00:19:53.761 | 99.00th=[ 1909], 99.50th=[ 1975], 99.90th=[ 2114], 99.95th=[ 2212], 00:19:53.761 | 99.99th=[ 2900] 00:19:53.761 bw ( KiB/s): min=166400, max=200192, per=100.00%, avg=183523.56, stdev=12399.79, samples=9 00:19:53.761 iops : min=41600, max=50048, avg=45880.89, stdev=3099.95, samples=9 00:19:53.761 lat (usec) : 500=0.01%, 750=0.01%, 1000=7.63% 00:19:53.761 lat (msec) : 2=92.03%, 4=0.33% 00:19:53.761 cpu : usr=41.34%, sys=57.70%, ctx=11, majf=0, minf=762 00:19:53.761 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:53.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.761 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:53.761 issued rwts: total=0,228020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:53.761 00:19:53.761 Run status group 0 (all jobs): 00:19:53.761 WRITE: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=891MiB (934MB), run=5001-5001msec 00:19:54.328 ----------------------------------------------------- 00:19:54.328 Suppressions used: 00:19:54.328 count bytes template 00:19:54.328 1 11 /usr/src/fio/parse.c 00:19:54.328 1 8 libtcmalloc_minimal.so 00:19:54.328 1 904 libcrypto.so 00:19:54.328 ----------------------------------------------------- 00:19:54.328 00:19:54.328 00:19:54.328 real 0m15.147s 00:19:54.328 user 0m7.935s 00:19:54.328 sys 0m6.803s 00:19:54.328 10:25:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
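Annotation: both fio_plugin passes above repeat the same sanitizer dance (autotest_common.sh@1341-1356 in the trace): locate the ASAN runtime the fio plugin links against and LD_PRELOAD it ahead of the plugin, since ASAN must be the first DSO loaded into the fio process. Reconstructed as a standalone sketch, with variable names and flags taken from the traced helper:

    fio_dir=/usr/src/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # third ldd column is the resolved library path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev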
00:19:54.328 ************************************ 00:19:54.328 10:25:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:54.328 END TEST xnvme_fio_plugin 00:19:54.328 ************************************ 00:19:54.328 10:25:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:54.328 10:25:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:54.328 10:25:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:54.328 10:25:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:54.328 10:25:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.328 10:25:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.328 10:25:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.328 ************************************ 00:19:54.328 START TEST xnvme_rpc 00:19:54.328 ************************************ 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72010 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72010 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72010 ']' 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.328 10:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.586 [2024-11-25 10:25:48.785190] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
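waitforlisten parks the test until the freshly forked spdk_tgt (pid 72010 here) answers on /var/tmp/spdk.sock; every rpc_cmd after that is a thin wrapper over SPDK's stock rpc.py client. A hedged sketch of the equivalent manual flow; the polling loop approximates what waitforlisten does rather than copying it:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt & tgt_pid=$!
    # poll the UNIX-domain RPC socket until the target responds
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    # -c maps to conserve_cpu=true, matching cc["true"]=-c in the trace above
    $SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c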
00:19:54.586 [2024-11-25 10:25:48.785381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72010 ] 00:19:54.845 [2024-11-25 10:25:48.967884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.845 [2024-11-25 10:25:49.147882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 xnvme_bdev 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.217 10:25:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72010 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72010 ']' 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72010 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72010 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.218 killing process with pid 72010 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72010' 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72010 00:19:56.218 10:25:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72010 00:19:58.785 00:19:58.785 real 0m4.185s 00:19:58.785 user 0m4.322s 00:19:58.785 sys 0m0.670s 00:19:58.785 10:25:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.785 10:25:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:58.785 ************************************ 00:19:58.785 END TEST xnvme_rpc 00:19:58.785 ************************************ 00:19:58.785 10:25:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:58.785 10:25:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.785 10:25:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.785 10:25:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.785 ************************************ 00:19:58.785 START TEST xnvme_bdevperf 00:19:58.785 ************************************ 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:58.785 10:25:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:58.785 { 00:19:58.785 "subsystems": [ 00:19:58.785 { 00:19:58.785 "subsystem": "bdev", 00:19:58.785 "config": [ 00:19:58.785 { 00:19:58.785 "params": { 00:19:58.785 "io_mechanism": "io_uring", 00:19:58.785 "conserve_cpu": true, 00:19:58.785 "filename": "/dev/nvme0n1", 00:19:58.785 "name": "xnvme_bdev" 00:19:58.785 }, 00:19:58.785 "method": "bdev_xnvme_create" 00:19:58.785 }, 00:19:58.785 { 00:19:58.785 "method": "bdev_wait_for_examine" 00:19:58.785 } 00:19:58.785 ] 00:19:58.785 } 00:19:58.785 ] 00:19:58.785 } 00:19:58.785 [2024-11-25 10:25:53.004284] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:19:58.785 [2024-11-25 10:25:53.004484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ] 00:19:59.044 [2024-11-25 10:25:53.193760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.044 [2024-11-25 10:25:53.341437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.611 Running I/O for 5 seconds... 00:20:01.479 43840.00 IOPS, 171.25 MiB/s [2024-11-25T10:25:56.839Z] 46192.00 IOPS, 180.44 MiB/s [2024-11-25T10:25:57.775Z] 47413.33 IOPS, 185.21 MiB/s [2024-11-25T10:25:59.151Z] 47048.00 IOPS, 183.78 MiB/s 00:20:04.818 Latency(us) 00:20:04.818 [2024-11-25T10:25:59.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.818 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:04.818 xnvme_bdev : 5.00 46786.93 182.76 0.00 0.00 1363.76 685.15 7477.06 00:20:04.818 [2024-11-25T10:25:59.151Z] =================================================================================================================== 00:20:04.818 [2024-11-25T10:25:59.151Z] Total : 46786.93 182.76 0.00 0.00 1363.76 685.15 7477.06 00:20:05.751 10:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:05.751 10:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:05.751 10:25:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:05.751 10:25:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:05.751 10:25:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:05.751 { 00:20:05.751 "subsystems": [ 00:20:05.751 { 00:20:05.751 "subsystem": "bdev", 00:20:05.751 "config": [ 00:20:05.751 { 00:20:05.751 "params": { 00:20:05.751 "io_mechanism": "io_uring", 00:20:05.751 "conserve_cpu": true, 00:20:05.751 "filename": "/dev/nvme0n1", 00:20:05.751 "name": "xnvme_bdev" 00:20:05.751 }, 00:20:05.751 "method": "bdev_xnvme_create" 00:20:05.751 }, 00:20:05.751 { 00:20:05.752 "method": "bdev_wait_for_examine" 00:20:05.752 } 00:20:05.752 ] 00:20:05.752 } 00:20:05.752 ] 00:20:05.752 } 00:20:05.752 [2024-11-25 10:26:00.044280] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
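The randread figures are internally consistent: at the 4 KiB IO size, 46786.93 IOPS works out to exactly the 182.76 MiB/s shown in the table. A one-line check, plus the framework_get_config/jq pattern rpc_xnvme used above to assert that each creation parameter round-trips (this needs the target still running):

    awk 'BEGIN { printf "%.2f MiB/s\n", 46786.93 * 4096 / 1048576 }'   # -> 182.76
    # parameter round-trip, jq filter copied from the rpc_xnvme trace above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true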
00:20:05.752 [2024-11-25 10:26:00.044480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72172 ] 00:20:06.010 [2024-11-25 10:26:00.232539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.270 [2024-11-25 10:26:00.384988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.539 Running I/O for 5 seconds... 00:20:08.455 27073.00 IOPS, 105.75 MiB/s [2024-11-25T10:26:04.161Z] 31203.00 IOPS, 121.89 MiB/s [2024-11-25T10:26:05.095Z] 26321.33 IOPS, 102.82 MiB/s [2024-11-25T10:26:06.098Z] 28698.75 IOPS, 112.10 MiB/s [2024-11-25T10:26:06.098Z] 26299.80 IOPS, 102.73 MiB/s 00:20:11.765 Latency(us) 00:20:11.765 [2024-11-25T10:26:06.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.765 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:11.765 xnvme_bdev : 5.01 26272.40 102.63 0.00 0.00 2428.90 63.30 22758.87 00:20:11.765 [2024-11-25T10:26:06.098Z] =================================================================================================================== 00:20:11.765 [2024-11-25T10:26:06.098Z] Total : 26272.40 102.63 0.00 0.00 2428.90 63.30 22758.87 00:20:13.141 00:20:13.141 real 0m14.196s 00:20:13.141 user 0m8.706s 00:20:13.141 sys 0m4.170s 00:20:13.141 10:26:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.141 10:26:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:13.141 ************************************ 00:20:13.141 END TEST xnvme_bdevperf 00:20:13.141 ************************************ 00:20:13.141 10:26:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:13.141 10:26:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.141 10:26:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.141 10:26:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.141 ************************************ 00:20:13.141 START TEST xnvme_fio_plugin 00:20:13.141 ************************************ 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:13.141 10:26:07 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:13.141 10:26:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:13.141 { 00:20:13.141 "subsystems": [ 00:20:13.141 { 00:20:13.141 "subsystem": "bdev", 00:20:13.141 "config": [ 00:20:13.141 { 00:20:13.141 "params": { 00:20:13.141 "io_mechanism": "io_uring", 00:20:13.141 "conserve_cpu": true, 00:20:13.141 "filename": "/dev/nvme0n1", 00:20:13.141 "name": "xnvme_bdev" 00:20:13.141 }, 00:20:13.141 "method": "bdev_xnvme_create" 00:20:13.141 }, 00:20:13.141 { 00:20:13.141 "method": "bdev_wait_for_examine" 00:20:13.141 } 00:20:13.141 ] 00:20:13.141 } 00:20:13.141 ] 00:20:13.141 } 00:20:13.141 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:13.141 fio-3.35 00:20:13.141 Starting 1 thread 00:20:19.714 00:20:19.714 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72297: Mon Nov 25 10:26:13 2024 00:20:19.714 read: IOPS=48.3k, BW=189MiB/s (198MB/s)(944MiB/5001msec) 00:20:19.714 slat (usec): min=2, max=126, avg= 3.81, stdev= 1.85 00:20:19.714 clat (usec): min=811, max=3962, avg=1171.77, stdev=169.72 00:20:19.714 lat (usec): min=814, max=3970, avg=1175.58, stdev=170.14 00:20:19.714 clat percentiles (usec): 00:20:19.714 | 1.00th=[ 889], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1045], 00:20:19.714 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:20:19.714 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1369], 95.00th=[ 1467], 00:20:19.714 | 99.00th=[ 1745], 99.50th=[ 1827], 99.90th=[ 2024], 99.95th=[ 2180], 00:20:19.714 | 99.99th=[ 3851] 00:20:19.714 bw ( 
KiB/s): min=178176, max=221696, per=99.65%, avg=192512.00, stdev=13168.46, samples=9 00:20:19.714 iops : min=44544, max=55424, avg=48128.00, stdev=3292.11, samples=9 00:20:19.714 lat (usec) : 1000=11.37% 00:20:19.714 lat (msec) : 2=88.51%, 4=0.11% 00:20:19.714 cpu : usr=62.48%, sys=33.38%, ctx=25, majf=0, minf=762 00:20:19.714 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:19.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.714 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:19.714 issued rwts: total=241536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:19.714 00:20:19.714 Run status group 0 (all jobs): 00:20:19.714 READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=944MiB (989MB), run=5001-5001msec 00:20:20.284 ----------------------------------------------------- 00:20:20.284 Suppressions used: 00:20:20.284 count bytes template 00:20:20.284 1 11 /usr/src/fio/parse.c 00:20:20.284 1 8 libtcmalloc_minimal.so 00:20:20.284 1 904 libcrypto.so 00:20:20.284 ----------------------------------------------------- 00:20:20.284 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:20.284 10:26:14 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:20.284 10:26:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:20.542 { 00:20:20.542 "subsystems": [ 00:20:20.542 { 00:20:20.542 "subsystem": "bdev", 00:20:20.542 "config": [ 00:20:20.542 { 00:20:20.542 "params": { 00:20:20.542 "io_mechanism": "io_uring", 00:20:20.542 "conserve_cpu": true, 00:20:20.542 "filename": "/dev/nvme0n1", 00:20:20.542 "name": "xnvme_bdev" 00:20:20.542 }, 00:20:20.542 "method": "bdev_xnvme_create" 00:20:20.542 }, 00:20:20.542 { 00:20:20.542 "method": "bdev_wait_for_examine" 00:20:20.542 } 00:20:20.542 ] 00:20:20.542 } 00:20:20.542 ] 00:20:20.542 } 00:20:20.542 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:20.542 fio-3.35 00:20:20.542 Starting 1 thread 00:20:27.152 00:20:27.152 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72389: Mon Nov 25 10:26:20 2024 00:20:27.152 write: IOPS=45.8k, BW=179MiB/s (188MB/s)(894MiB/5001msec); 0 zone resets 00:20:27.152 slat (usec): min=2, max=160, avg= 4.25, stdev= 2.77 00:20:27.152 clat (usec): min=738, max=3561, avg=1228.00, stdev=170.69 00:20:27.152 lat (usec): min=741, max=3566, avg=1232.25, stdev=171.09 00:20:27.152 clat percentiles (usec): 00:20:27.152 | 1.00th=[ 971], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1090], 00:20:27.152 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237], 00:20:27.152 | 70.00th=[ 1287], 80.00th=[ 1336], 90.00th=[ 1434], 95.00th=[ 1549], 00:20:27.152 | 99.00th=[ 1745], 99.50th=[ 1827], 99.90th=[ 2442], 99.95th=[ 3163], 00:20:27.152 | 99.99th=[ 3458] 00:20:27.152 bw ( KiB/s): min=171008, max=191488, per=100.00%, avg=183634.67, stdev=7120.78, samples=9 00:20:27.152 iops : min=42752, max=47872, avg=45908.67, stdev=1780.19, samples=9 00:20:27.152 lat (usec) : 750=0.01%, 1000=2.89% 00:20:27.152 lat (msec) : 2=96.88%, 4=0.23% 00:20:27.152 cpu : usr=61.34%, sys=34.66%, ctx=12, majf=0, minf=762 00:20:27.152 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:27.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.153 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:27.153 issued rwts: total=0,228957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.153 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:27.153 00:20:27.153 Run status group 0 (all jobs): 00:20:27.153 WRITE: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=894MiB (938MB), run=5001-5001msec 00:20:28.091 ----------------------------------------------------- 00:20:28.091 Suppressions used: 00:20:28.091 count bytes template 00:20:28.091 1 11 /usr/src/fio/parse.c 00:20:28.091 1 8 libtcmalloc_minimal.so 00:20:28.091 1 904 libcrypto.so 00:20:28.091 ----------------------------------------------------- 00:20:28.091 00:20:28.091 00:20:28.091 real 0m15.027s 00:20:28.091 user 0m10.027s 00:20:28.091 sys 0m4.305s 00:20:28.091 10:26:22 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.091 ************************************ 00:20:28.091 END TEST xnvme_fio_plugin 00:20:28.091 ************************************ 00:20:28.091 10:26:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:28.091 10:26:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:28.091 10:26:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.091 10:26:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.091 10:26:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:28.091 ************************************ 00:20:28.091 START TEST xnvme_rpc 00:20:28.091 ************************************ 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72481 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72481 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72481 ']' 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.091 10:26:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:28.091 [2024-11-25 10:26:22.346712] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
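At this point the outer loop swaps io_mechanism to io_uring_cmd and the filename to /dev/ng0n1, the NVMe generic character device, which xNVMe drives through io_uring command passthrough rather than the block layer; conserve_cpu restarts at false, which is why the create call carries an empty string where -c sat before. Reduced to a plain rpc.py call:

    # no -c flag: cc["false"] is the empty string in the trace above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd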
00:20:28.091 [2024-11-25 10:26:22.346920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72481 ] 00:20:28.350 [2024-11-25 10:26:22.523100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.350 [2024-11-25 10:26:22.674615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.724 xnvme_bdev 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.724 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72481 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72481 ']' 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72481 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72481 00:20:29.725 killing process with pid 72481 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72481' 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72481 00:20:29.725 10:26:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72481 00:20:32.281 00:20:32.281 real 0m4.058s 00:20:32.281 user 0m4.124s 00:20:32.281 sys 0m0.682s 00:20:32.281 10:26:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.281 ************************************ 00:20:32.281 END TEST xnvme_rpc 00:20:32.281 ************************************ 00:20:32.281 10:26:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:32.281 10:26:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:32.281 10:26:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:32.281 10:26:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.281 10:26:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:32.281 ************************************ 00:20:32.281 START TEST xnvme_bdevperf 00:20:32.281 ************************************ 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:32.281 10:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:32.281 { 00:20:32.281 "subsystems": [ 00:20:32.281 { 00:20:32.281 "subsystem": "bdev", 00:20:32.281 "config": [ 00:20:32.281 { 00:20:32.281 "params": { 00:20:32.281 "io_mechanism": "io_uring_cmd", 00:20:32.281 "conserve_cpu": false, 00:20:32.281 "filename": "/dev/ng0n1", 00:20:32.281 "name": "xnvme_bdev" 00:20:32.281 }, 00:20:32.281 "method": "bdev_xnvme_create" 00:20:32.281 }, 00:20:32.281 { 00:20:32.281 "method": "bdev_wait_for_examine" 00:20:32.281 } 00:20:32.281 ] 00:20:32.281 } 00:20:32.281 ] 00:20:32.281 } 00:20:32.281 [2024-11-25 10:26:26.458666] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:20:32.281 [2024-11-25 10:26:26.458862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72565 ] 00:20:32.539 [2024-11-25 10:26:26.653225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.539 [2024-11-25 10:26:26.819635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.106 Running I/O for 5 seconds... 00:20:34.981 54784.00 IOPS, 214.00 MiB/s [2024-11-25T10:26:30.269Z] 54272.00 IOPS, 212.00 MiB/s [2024-11-25T10:26:31.646Z] 54570.67 IOPS, 213.17 MiB/s [2024-11-25T10:26:32.213Z] 53920.00 IOPS, 210.62 MiB/s [2024-11-25T10:26:32.213Z] 54694.40 IOPS, 213.65 MiB/s 00:20:37.880 Latency(us) 00:20:37.880 [2024-11-25T10:26:32.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.880 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:37.880 xnvme_bdev : 5.00 54680.93 213.60 0.00 0.00 1166.87 822.92 3738.53 00:20:37.880 [2024-11-25T10:26:32.213Z] =================================================================================================================== 00:20:37.880 [2024-11-25T10:26:32.213Z] Total : 54680.93 213.60 0.00 0.00 1166.87 822.92 3738.53 00:20:39.255 10:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:39.255 10:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:39.255 10:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:39.255 10:26:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:39.255 10:26:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:39.255 { 00:20:39.255 "subsystems": [ 00:20:39.255 { 00:20:39.255 "subsystem": "bdev", 00:20:39.255 "config": [ 00:20:39.255 { 00:20:39.255 "params": { 00:20:39.255 "io_mechanism": "io_uring_cmd", 00:20:39.255 "conserve_cpu": false, 00:20:39.255 "filename": "/dev/ng0n1", 00:20:39.255 "name": "xnvme_bdev" 00:20:39.255 }, 00:20:39.255 "method": "bdev_xnvme_create" 00:20:39.255 }, 00:20:39.255 { 00:20:39.255 "method": "bdev_wait_for_examine" 00:20:39.255 } 00:20:39.255 ] 00:20:39.255 } 00:20:39.255 ] 00:20:39.255 } 00:20:39.255 [2024-11-25 10:26:33.469648] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
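A quick sanity check on the randread numbers just reported: with the queue held at 64 outstanding IOs, Little's law gives IOPS ≈ depth / mean latency, and 64 / 1166.87 µs predicts about 54.8k, within roughly 0.3% of the measured 54680.93:

    awk 'BEGIN { printf "%.0f IOPS predicted\n", 64 / 1166.87e-6 }'   # -> 54847 vs 54681 measured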
00:20:39.255 [2024-11-25 10:26:33.469861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72644 ] 00:20:39.514 [2024-11-25 10:26:33.665699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.514 [2024-11-25 10:26:33.821054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.081 Running I/O for 5 seconds... 00:20:41.953 49408.00 IOPS, 193.00 MiB/s [2024-11-25T10:26:37.221Z] 47474.50 IOPS, 185.45 MiB/s [2024-11-25T10:26:38.597Z] 46881.67 IOPS, 183.13 MiB/s [2024-11-25T10:26:39.532Z] 46505.25 IOPS, 181.66 MiB/s 00:20:45.199 Latency(us) 00:20:45.199 [2024-11-25T10:26:39.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.199 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:45.199 xnvme_bdev : 5.00 46396.63 181.24 0.00 0.00 1374.80 744.73 4617.31 00:20:45.199 [2024-11-25T10:26:39.532Z] =================================================================================================================== 00:20:45.199 [2024-11-25T10:26:39.532Z] Total : 46396.63 181.24 0.00 0.00 1374.80 744.73 4617.31 00:20:46.132 10:26:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:46.132 10:26:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:46.132 10:26:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:46.132 10:26:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:46.132 10:26:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:46.132 { 00:20:46.132 "subsystems": [ 00:20:46.132 { 00:20:46.132 "subsystem": "bdev", 00:20:46.132 "config": [ 00:20:46.132 { 00:20:46.132 "params": { 00:20:46.132 "io_mechanism": "io_uring_cmd", 00:20:46.132 "conserve_cpu": false, 00:20:46.132 "filename": "/dev/ng0n1", 00:20:46.132 "name": "xnvme_bdev" 00:20:46.132 }, 00:20:46.132 "method": "bdev_xnvme_create" 00:20:46.132 }, 00:20:46.132 { 00:20:46.132 "method": "bdev_wait_for_examine" 00:20:46.132 } 00:20:46.132 ] 00:20:46.132 } 00:20:46.132 ] 00:20:46.132 } 00:20:46.132 [2024-11-25 10:26:40.402716] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:20:46.132 [2024-11-25 10:26:40.402871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72720 ] 00:20:46.390 [2024-11-25 10:26:40.581328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.390 [2024-11-25 10:26:40.713101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.956 Running I/O for 5 seconds... 
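The unmap run starting here is the third of four patterns: unlike the io_uring block-device leg earlier, which only cycled randread and randwrite, the io_uring_cmd leg also exercises unmap and write_zeroes. The loop driving these runs plausibly reduces to the sketch below; gen_conf is the harness's JSON generator seen in the trace, used here as a stand-in:

    for w in randread randwrite unmap write_zeroes; do   # pattern list inferred from the runs
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json <(gen_conf) -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done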
00:20:48.826 65280.00 IOPS, 255.00 MiB/s [2024-11-25T10:26:44.096Z] 66656.00 IOPS, 260.38 MiB/s [2024-11-25T10:26:45.471Z] 66368.00 IOPS, 259.25 MiB/s [2024-11-25T10:26:46.407Z] 67424.00 IOPS, 263.38 MiB/s 00:20:52.074 Latency(us) 00:20:52.074 [2024-11-25T10:26:46.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.074 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:52.074 xnvme_bdev : 5.00 67597.00 264.05 0.00 0.00 942.59 543.65 4796.04 00:20:52.074 [2024-11-25T10:26:46.407Z] =================================================================================================================== 00:20:52.074 [2024-11-25T10:26:46.407Z] Total : 67597.00 264.05 0.00 0.00 942.59 543.65 4796.04 00:20:53.007 10:26:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:53.007 10:26:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:20:53.007 10:26:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:53.007 10:26:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:53.007 10:26:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.007 { 00:20:53.007 "subsystems": [ 00:20:53.007 { 00:20:53.007 "subsystem": "bdev", 00:20:53.007 "config": [ 00:20:53.007 { 00:20:53.007 "params": { 00:20:53.007 "io_mechanism": "io_uring_cmd", 00:20:53.007 "conserve_cpu": false, 00:20:53.007 "filename": "/dev/ng0n1", 00:20:53.007 "name": "xnvme_bdev" 00:20:53.007 }, 00:20:53.007 "method": "bdev_xnvme_create" 00:20:53.007 }, 00:20:53.007 { 00:20:53.007 "method": "bdev_wait_for_examine" 00:20:53.007 } 00:20:53.007 ] 00:20:53.007 } 00:20:53.007 ] 00:20:53.007 } 00:20:53.007 [2024-11-25 10:26:47.295875] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:20:53.007 [2024-11-25 10:26:47.296073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72801 ] 00:20:53.264 [2024-11-25 10:26:47.476471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.521 [2024-11-25 10:26:47.623789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.778 Running I/O for 5 seconds... 
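Unmap comes out fastest of the four patterns, 67.6k IOPS at a 942.59 µs mean against 46.4k at 1374.80 µs for randwrite, which is plausible given that deallocate commands carry no data payload. The bandwidth column is again just IOPS times the 4 KiB block size:

    awk 'BEGIN { printf "%.2f MiB/s\n", 67597.00 * 4096 / 1048576 }'   # -> 264.05, as reported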
00:20:56.086 42945.00 IOPS, 167.75 MiB/s [2024-11-25T10:26:51.359Z] 32638.50 IOPS, 127.49 MiB/s [2024-11-25T10:26:52.295Z] 24641.00 IOPS, 96.25 MiB/s [2024-11-25T10:26:53.231Z] 20550.75 IOPS, 80.28 MiB/s [2024-11-25T10:26:53.231Z] 21757.20 IOPS, 84.99 MiB/s 00:20:58.898 Latency(us) 00:20:58.898 [2024-11-25T10:26:53.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.898 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:20:58.898 xnvme_bdev : 5.01 21755.58 84.98 0.00 0.00 2936.65 84.25 46709.29 00:20:58.898 [2024-11-25T10:26:53.231Z] =================================================================================================================== 00:20:58.898 [2024-11-25T10:26:53.231Z] Total : 21755.58 84.98 0.00 0.00 2936.65 84.25 46709.29 00:21:00.298 00:21:00.298 real 0m27.944s 00:21:00.298 user 0m15.980s 00:21:00.298 sys 0m11.547s 00:21:00.298 10:26:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.298 10:26:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:00.298 ************************************ 00:21:00.298 END TEST xnvme_bdevperf 00:21:00.298 ************************************ 00:21:00.298 10:26:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:00.298 10:26:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.298 10:26:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.298 10:26:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:00.298 ************************************ 00:21:00.298 START TEST xnvme_fio_plugin 00:21:00.298 ************************************ 00:21:00.298 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:00.298 10:26:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:00.298 10:26:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:00.298 10:26:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
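write_zeroes is by far the noisiest pass: per-second throughput swung from 42.9k down to 20.6k IOPS, and latencies span 84 µs to 46.7 ms against single-digit-millisecond maxima in the other three patterns of this run. After it the fio plugin test begins, and the shift/ldd/grep/awk lines around this point are the harness detecting whether the plugin was built with a sanitizer, so the runtime gets loaded ahead of the plugin. Condensed from the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do        # sanitizers array from the trace
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break                    # /usr/lib64/libasan.so.8 on this host
    done
    # then: LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio ... as traced below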
00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:00.299 10:26:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:00.299 { 00:21:00.299 "subsystems": [ 00:21:00.299 { 00:21:00.299 "subsystem": "bdev", 00:21:00.299 "config": [ 00:21:00.299 { 00:21:00.299 "params": { 00:21:00.299 "io_mechanism": "io_uring_cmd", 00:21:00.299 "conserve_cpu": false, 00:21:00.299 "filename": "/dev/ng0n1", 00:21:00.299 "name": "xnvme_bdev" 00:21:00.299 }, 00:21:00.299 "method": "bdev_xnvme_create" 00:21:00.299 }, 00:21:00.299 { 00:21:00.299 "method": "bdev_wait_for_examine" 00:21:00.299 } 00:21:00.299 ] 00:21:00.299 } 00:21:00.299 ] 00:21:00.299 } 00:21:00.564 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:00.564 fio-3.35 00:21:00.564 Starting 1 thread 00:21:07.123 00:21:07.123 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72925: Mon Nov 25 10:27:00 2024 00:21:07.123 read: IOPS=52.7k, BW=206MiB/s (216MB/s)(1030MiB/5001msec) 00:21:07.123 slat (nsec): min=2581, max=46457, avg=3627.83, stdev=1460.76 00:21:07.123 clat (usec): min=710, max=4492, avg=1069.96, stdev=158.84 00:21:07.123 lat (usec): min=713, max=4504, avg=1073.59, stdev=159.14 00:21:07.123 clat percentiles (usec): 00:21:07.123 | 1.00th=[ 824], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 955], 00:21:07.123 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1090], 00:21:07.123 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1254], 95.00th=[ 1336], 00:21:07.123 | 99.00th=[ 1532], 99.50th=[ 1614], 99.90th=[ 2040], 99.95th=[ 2573], 00:21:07.123 | 99.99th=[ 4359] 00:21:07.123 bw ( KiB/s): min=194048, max=231424, per=100.00%, avg=211911.11, stdev=13593.20, samples=9 00:21:07.123 iops : min=48512, max=57856, avg=52977.78, stdev=3398.30, samples=9 00:21:07.123 lat (usec) : 750=0.01%, 1000=34.20% 00:21:07.123 lat (msec) : 2=65.67%, 4=0.09%, 10=0.02% 00:21:07.123 cpu : usr=41.44%, sys=57.68%, ctx=13, majf=0, minf=762 00:21:07.123 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:07.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.123 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:21:07.123 issued rwts: total=263552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:07.123 00:21:07.123 Run status group 0 (all jobs): 00:21:07.123 READ: bw=206MiB/s (216MB/s), 206MiB/s-206MiB/s (216MB/s-216MB/s), io=1030MiB (1080MB), run=5001-5001msec 00:21:07.690 ----------------------------------------------------- 00:21:07.690 Suppressions used: 00:21:07.690 count bytes template 00:21:07.690 1 11 /usr/src/fio/parse.c 00:21:07.690 1 8 libtcmalloc_minimal.so 00:21:07.690 1 904 libcrypto.so 00:21:07.690 ----------------------------------------------------- 00:21:07.690 00:21:07.690 10:27:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:07.690 10:27:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.690 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.690 10:27:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:07.690 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:07.690 10:27:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.691 10:27:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.691 { 00:21:07.691 "subsystems": [ 00:21:07.691 { 00:21:07.691 "subsystem": "bdev", 00:21:07.691 "config": [ 00:21:07.691 { 00:21:07.691 "params": { 00:21:07.691 "io_mechanism": "io_uring_cmd", 00:21:07.691 "conserve_cpu": false, 00:21:07.691 "filename": "/dev/ng0n1", 00:21:07.691 "name": "xnvme_bdev" 00:21:07.691 }, 00:21:07.691 "method": "bdev_xnvme_create" 00:21:07.691 }, 00:21:07.691 { 00:21:07.691 "method": "bdev_wait_for_examine" 00:21:07.691 } 00:21:07.691 ] 00:21:07.691 } 00:21:07.691 ] 00:21:07.691 } 00:21:07.949 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:07.949 fio-3.35 00:21:07.949 Starting 1 thread 00:21:14.507 00:21:14.507 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73020: Mon Nov 25 10:27:07 2024 00:21:14.507 write: IOPS=48.5k, BW=189MiB/s (199MB/s)(947MiB/5002msec); 0 zone resets 00:21:14.507 slat (usec): min=2, max=244, avg= 4.44, stdev= 2.19 00:21:14.507 clat (usec): min=777, max=2340, avg=1145.03, stdev=175.60 00:21:14.508 lat (usec): min=781, max=2369, avg=1149.47, stdev=176.45 00:21:14.508 clat percentiles (usec): 00:21:14.508 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 1012], 00:21:14.508 | 30.00th=[ 1045], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:21:14.508 | 70.00th=[ 1188], 80.00th=[ 1254], 90.00th=[ 1352], 95.00th=[ 1483], 00:21:14.508 | 99.00th=[ 1745], 99.50th=[ 1926], 99.90th=[ 2147], 99.95th=[ 2212], 00:21:14.508 | 99.99th=[ 2278] 00:21:14.508 bw ( KiB/s): min=189952, max=203264, per=100.00%, avg=195470.22, stdev=4357.85, samples=9 00:21:14.508 iops : min=47488, max=50816, avg=48867.56, stdev=1089.46, samples=9 00:21:14.508 lat (usec) : 1000=17.91% 00:21:14.508 lat (msec) : 2=81.73%, 4=0.35% 00:21:14.508 cpu : usr=45.35%, sys=53.65%, ctx=11, majf=0, minf=762 00:21:14.508 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:14.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.508 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:14.508 issued rwts: total=0,242496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:14.508 00:21:14.508 Run status group 0 (all jobs): 00:21:14.508 WRITE: bw=189MiB/s (199MB/s), 189MiB/s-189MiB/s (199MB/s-199MB/s), io=947MiB (993MB), run=5002-5002msec 00:21:15.072 ----------------------------------------------------- 00:21:15.072 Suppressions used: 00:21:15.072 count bytes template 00:21:15.072 1 11 /usr/src/fio/parse.c 00:21:15.072 1 8 libtcmalloc_minimal.so 00:21:15.072 1 904 libcrypto.so 00:21:15.072 ----------------------------------------------------- 00:21:15.072 00:21:15.072 00:21:15.072 real 0m14.899s 00:21:15.072 user 0m8.165s 00:21:15.072 sys 0m6.377s 00:21:15.072 10:27:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.072 10:27:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:15.072 ************************************ 00:21:15.072 END TEST xnvme_fio_plugin 00:21:15.072 ************************************ 00:21:15.072 10:27:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:15.072 10:27:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:21:15.072 10:27:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:21:15.072 
10:27:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:15.072 10:27:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:15.072 10:27:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.072 10:27:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:15.072 ************************************ 00:21:15.073 START TEST xnvme_rpc 00:21:15.073 ************************************ 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73111 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73111 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73111 ']' 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.073 10:27:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.331 [2024-11-25 10:27:09.414708] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
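For reference, the RPC sequence that the xnvme_rpc test drives through the rpc_cmd wrapper in the trace below can be reproduced by hand with SPDK's standalone scripts/rpc.py client — a minimal sketch, assuming an spdk_tgt from this build is already listening on the default /var/tmp/spdk.sock and that /dev/ng0n1 is the io_uring_cmd char device this run uses:

# Create an xnvme bdev over the io_uring_cmd mechanism (-c = conserve_cpu),
# mirroring "rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c".
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# Read the bdev subsystem config back and extract one parameter of the
# create call, as the jq filters in the trace do for name/filename/
# io_mechanism/conserve_cpu.
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'

# Tear the bdev down again before the target is killed.
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev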
00:21:15.331 [2024-11-25 10:27:09.414911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73111 ] 00:21:15.331 [2024-11-25 10:27:09.601939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.589 [2024-11-25 10:27:09.735531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.523 xnvme_bdev 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73111 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73111 ']' 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73111 00:21:16.523 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73111 00:21:16.524 killing process with pid 73111 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73111' 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73111 00:21:16.524 10:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73111 00:21:19.051 ************************************ 00:21:19.051 END TEST xnvme_rpc 00:21:19.051 ************************************ 00:21:19.051 00:21:19.051 real 0m3.761s 00:21:19.051 user 0m3.867s 00:21:19.051 sys 0m0.542s 00:21:19.051 10:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.051 10:27:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.051 10:27:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:19.051 10:27:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.051 10:27:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.051 10:27:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:19.051 ************************************ 00:21:19.051 START TEST xnvme_bdevperf 00:21:19.051 ************************************ 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:19.051 10:27:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:19.051 { 00:21:19.051 "subsystems": [ 00:21:19.051 { 00:21:19.051 "subsystem": "bdev", 00:21:19.051 "config": [ 00:21:19.051 { 00:21:19.051 "params": { 00:21:19.051 "io_mechanism": "io_uring_cmd", 00:21:19.051 "conserve_cpu": true, 00:21:19.051 "filename": "/dev/ng0n1", 00:21:19.051 "name": "xnvme_bdev" 00:21:19.051 }, 00:21:19.051 "method": "bdev_xnvme_create" 00:21:19.051 }, 00:21:19.051 { 00:21:19.051 "method": "bdev_wait_for_examine" 00:21:19.051 } 00:21:19.051 ] 00:21:19.051 } 00:21:19.051 ] 00:21:19.051 } 00:21:19.051 [2024-11-25 10:27:13.189937] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:21:19.051 [2024-11-25 10:27:13.190093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73185 ] 00:21:19.051 [2024-11-25 10:27:13.370720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.310 [2024-11-25 10:27:13.521121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.568 Running I/O for 5 seconds... 00:21:21.871 55296.00 IOPS, 216.00 MiB/s [2024-11-25T10:27:17.138Z] 55520.00 IOPS, 216.88 MiB/s [2024-11-25T10:27:18.073Z] 55573.33 IOPS, 217.08 MiB/s [2024-11-25T10:27:19.008Z] 56304.00 IOPS, 219.94 MiB/s 00:21:24.675 Latency(us) 00:21:24.675 [2024-11-25T10:27:19.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.675 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:24.675 xnvme_bdev : 5.00 56121.24 219.22 0.00 0.00 1136.73 755.90 3932.16 00:21:24.675 [2024-11-25T10:27:19.008Z] =================================================================================================================== 00:21:24.675 [2024-11-25T10:27:19.008Z] Total : 56121.24 219.22 0.00 0.00 1136.73 755.90 3932.16 00:21:25.611 10:27:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:25.611 10:27:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:25.611 10:27:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:25.611 10:27:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:25.611 10:27:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:25.870 { 00:21:25.870 "subsystems": [ 00:21:25.870 { 00:21:25.870 "subsystem": "bdev", 00:21:25.870 "config": [ 00:21:25.870 { 00:21:25.870 "params": { 00:21:25.870 "io_mechanism": "io_uring_cmd", 00:21:25.870 "conserve_cpu": true, 00:21:25.870 "filename": "/dev/ng0n1", 00:21:25.870 "name": "xnvme_bdev" 00:21:25.870 }, 00:21:25.870 "method": "bdev_xnvme_create" 00:21:25.870 }, 00:21:25.870 { 00:21:25.870 "method": "bdev_wait_for_examine" 00:21:25.870 } 00:21:25.870 ] 00:21:25.870 } 00:21:25.870 ] 00:21:25.870 } 00:21:25.870 [2024-11-25 10:27:20.041517] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
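Each bdevperf pass in this block follows the same invocation pattern: the JSON emitted by gen_conf is handed to bdevperf over an anonymous file descriptor, which is what appears as --json /dev/fd/62 in the trace, while the enclosing for io_pattern loop swaps -w across randread, randwrite, unmap and write_zeroes. A minimal standalone equivalent of one pass, assuming the SPDK repo root as working directory and the same JSON the harness generates:

# Hand-run form of one bdevperf pass: the subsystem config is supplied via
# bash process substitution, so bdevperf sees it as a /dev/fd/* path.
./build/examples/bdevperf --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096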
00:21:25.870 [2024-11-25 10:27:20.041692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73265 ] 00:21:26.129 [2024-11-25 10:27:20.227574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.129 [2024-11-25 10:27:20.353483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.387 Running I/O for 5 seconds... 00:21:28.716 45567.00 IOPS, 178.00 MiB/s [2024-11-25T10:27:23.981Z] 46495.50 IOPS, 181.62 MiB/s [2024-11-25T10:27:24.915Z] 47151.00 IOPS, 184.18 MiB/s [2024-11-25T10:27:25.848Z] 45196.00 IOPS, 176.55 MiB/s [2024-11-25T10:27:25.848Z] 43757.60 IOPS, 170.93 MiB/s 00:21:31.515 Latency(us) 00:21:31.515 [2024-11-25T10:27:25.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.515 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:31.515 xnvme_bdev : 5.01 43719.14 170.78 0.00 0.00 1458.50 65.16 10307.03 00:21:31.515 [2024-11-25T10:27:25.848Z] =================================================================================================================== 00:21:31.515 [2024-11-25T10:27:25.848Z] Total : 43719.14 170.78 0.00 0.00 1458.50 65.16 10307.03 00:21:32.896 10:27:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:32.896 10:27:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:32.896 10:27:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:32.896 10:27:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:32.896 10:27:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:32.896 { 00:21:32.896 "subsystems": [ 00:21:32.896 { 00:21:32.896 "subsystem": "bdev", 00:21:32.896 "config": [ 00:21:32.896 { 00:21:32.896 "params": { 00:21:32.896 "io_mechanism": "io_uring_cmd", 00:21:32.896 "conserve_cpu": true, 00:21:32.896 "filename": "/dev/ng0n1", 00:21:32.896 "name": "xnvme_bdev" 00:21:32.896 }, 00:21:32.896 "method": "bdev_xnvme_create" 00:21:32.896 }, 00:21:32.896 { 00:21:32.896 "method": "bdev_wait_for_examine" 00:21:32.896 } 00:21:32.896 ] 00:21:32.896 } 00:21:32.896 ] 00:21:32.896 } 00:21:32.896 [2024-11-25 10:27:26.925898] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:21:32.897 [2024-11-25 10:27:26.926042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73344 ] 00:21:32.897 [2024-11-25 10:27:27.101065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.154 [2024-11-25 10:27:27.253987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.411 Running I/O for 5 seconds... 
00:21:35.718 72768.00 IOPS, 284.25 MiB/s [2024-11-25T10:27:30.983Z] 72000.00 IOPS, 281.25 MiB/s [2024-11-25T10:27:31.917Z] 71765.33 IOPS, 280.33 MiB/s [2024-11-25T10:27:32.848Z] 71952.00 IOPS, 281.06 MiB/s [2024-11-25T10:27:32.848Z] 72102.40 IOPS, 281.65 MiB/s 00:21:38.515 Latency(us) 00:21:38.515 [2024-11-25T10:27:32.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.515 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:38.515 xnvme_bdev : 5.00 72074.92 281.54 0.00 0.00 883.99 472.90 2949.12 00:21:38.515 [2024-11-25T10:27:32.848Z] =================================================================================================================== 00:21:38.515 [2024-11-25T10:27:32.848Z] Total : 72074.92 281.54 0.00 0.00 883.99 472.90 2949.12 00:21:39.886 10:27:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:39.886 10:27:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:39.886 10:27:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:39.886 10:27:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:39.886 10:27:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:39.886 { 00:21:39.886 "subsystems": [ 00:21:39.886 { 00:21:39.886 "subsystem": "bdev", 00:21:39.886 "config": [ 00:21:39.886 { 00:21:39.886 "params": { 00:21:39.886 "io_mechanism": "io_uring_cmd", 00:21:39.886 "conserve_cpu": true, 00:21:39.886 "filename": "/dev/ng0n1", 00:21:39.886 "name": "xnvme_bdev" 00:21:39.886 }, 00:21:39.886 "method": "bdev_xnvme_create" 00:21:39.886 }, 00:21:39.886 { 00:21:39.886 "method": "bdev_wait_for_examine" 00:21:39.886 } 00:21:39.886 ] 00:21:39.886 } 00:21:39.886 ] 00:21:39.886 } 00:21:39.886 [2024-11-25 10:27:33.875051] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:21:39.886 [2024-11-25 10:27:33.875194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73420 ] 00:21:39.886 [2024-11-25 10:27:34.051293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.886 [2024-11-25 10:27:34.201952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.451 Running I/O for 5 seconds... 
00:21:42.331 42896.00 IOPS, 167.56 MiB/s [2024-11-25T10:27:37.597Z] 41637.00 IOPS, 162.64 MiB/s [2024-11-25T10:27:38.969Z] 40143.67 IOPS, 156.81 MiB/s [2024-11-25T10:27:39.903Z] 39667.00 IOPS, 154.95 MiB/s 00:21:45.570 Latency(us) 00:21:45.570 [2024-11-25T10:27:39.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.570 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:45.570 xnvme_bdev : 5.00 39424.57 154.00 0.00 0.00 1615.21 208.52 9770.82 00:21:45.570 [2024-11-25T10:27:39.903Z] =================================================================================================================== 00:21:45.570 [2024-11-25T10:27:39.903Z] Total : 39424.57 154.00 0.00 0.00 1615.21 208.52 9770.82 00:21:46.505 00:21:46.505 real 0m27.704s 00:21:46.505 user 0m18.906s 00:21:46.505 sys 0m6.547s 00:21:46.505 10:27:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.505 10:27:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:46.505 ************************************ 00:21:46.505 END TEST xnvme_bdevperf 00:21:46.505 ************************************ 00:21:46.776 10:27:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:46.776 10:27:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.776 10:27:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.776 10:27:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:46.776 ************************************ 00:21:46.776 START TEST xnvme_fio_plugin 00:21:46.776 ************************************ 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:46.776 10:27:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:46.776 { 00:21:46.776 "subsystems": [ 00:21:46.776 { 00:21:46.776 "subsystem": "bdev", 00:21:46.776 "config": [ 00:21:46.776 { 00:21:46.776 "params": { 00:21:46.776 "io_mechanism": "io_uring_cmd", 00:21:46.776 "conserve_cpu": true, 00:21:46.776 "filename": "/dev/ng0n1", 00:21:46.776 "name": "xnvme_bdev" 00:21:46.776 }, 00:21:46.776 "method": "bdev_xnvme_create" 00:21:46.776 }, 00:21:46.776 { 00:21:46.776 "method": "bdev_wait_for_examine" 00:21:46.776 } 00:21:46.776 ] 00:21:46.776 } 00:21:46.776 ] 00:21:46.776 } 00:21:46.776 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:46.776 fio-3.35 00:21:46.776 Starting 1 thread 00:21:53.371 00:21:53.371 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73545: Mon Nov 25 10:27:46 2024 00:21:53.371 read: IOPS=51.9k, BW=203MiB/s (213MB/s)(1014MiB/5001msec) 00:21:53.371 slat (usec): min=2, max=109, avg= 3.87, stdev= 1.56 00:21:53.371 clat (usec): min=706, max=2373, avg=1078.48, stdev=160.28 00:21:53.371 lat (usec): min=709, max=2378, avg=1082.35, stdev=160.76 00:21:53.371 clat percentiles (usec): 00:21:53.371 | 1.00th=[ 807], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 938], 00:21:53.371 | 30.00th=[ 979], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1106], 00:21:53.371 | 70.00th=[ 1139], 80.00th=[ 1205], 90.00th=[ 1287], 95.00th=[ 1369], 00:21:53.371 | 99.00th=[ 1532], 99.50th=[ 1598], 99.90th=[ 1811], 99.95th=[ 1958], 00:21:53.371 | 99.99th=[ 2278] 00:21:53.371 bw ( KiB/s): min=189440, max=220160, per=100.00%, avg=207928.89, stdev=10157.89, samples=9 00:21:53.371 iops : min=47360, max=55040, avg=51982.22, stdev=2539.47, samples=9 00:21:53.371 lat (usec) : 750=0.04%, 1000=35.03% 00:21:53.371 lat (msec) : 2=64.90%, 4=0.04% 00:21:53.371 cpu : usr=69.66%, sys=27.40%, ctx=13, majf=0, minf=762 00:21:53.371 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:53.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.371 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:53.371 issued rwts: total=259520,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:53.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.371 00:21:53.371 Run status group 0 (all jobs): 00:21:53.372 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1014MiB (1063MB), run=5001-5001msec 00:21:53.938 ----------------------------------------------------- 00:21:53.938 Suppressions used: 00:21:53.938 count bytes template 00:21:53.938 1 11 /usr/src/fio/parse.c 00:21:53.938 1 8 libtcmalloc_minimal.so 00:21:53.938 1 904 libcrypto.so 00:21:53.938 ----------------------------------------------------- 00:21:53.938 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:54.196 10:27:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 
--name xnvme_bdev 00:21:54.196 { 00:21:54.196 "subsystems": [ 00:21:54.196 { 00:21:54.196 "subsystem": "bdev", 00:21:54.196 "config": [ 00:21:54.196 { 00:21:54.196 "params": { 00:21:54.196 "io_mechanism": "io_uring_cmd", 00:21:54.196 "conserve_cpu": true, 00:21:54.196 "filename": "/dev/ng0n1", 00:21:54.196 "name": "xnvme_bdev" 00:21:54.196 }, 00:21:54.196 "method": "bdev_xnvme_create" 00:21:54.196 }, 00:21:54.196 { 00:21:54.196 "method": "bdev_wait_for_examine" 00:21:54.196 } 00:21:54.196 ] 00:21:54.196 } 00:21:54.196 ] 00:21:54.196 } 00:21:54.196 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:54.196 fio-3.35 00:21:54.196 Starting 1 thread 00:22:00.758 00:22:00.758 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73636: Mon Nov 25 10:27:54 2024 00:22:00.758 write: IOPS=45.5k, BW=178MiB/s (186MB/s)(889MiB/5001msec); 0 zone resets 00:22:00.758 slat (nsec): min=2533, max=75253, avg=4886.25, stdev=3153.61 00:22:00.758 clat (usec): min=212, max=3914, avg=1211.94, stdev=255.26 00:22:00.758 lat (usec): min=216, max=3924, avg=1216.83, stdev=257.02 00:22:00.758 clat percentiles (usec): 00:22:00.758 | 1.00th=[ 848], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1037], 00:22:00.758 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1221], 00:22:00.758 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1467], 95.00th=[ 1614], 00:22:00.758 | 99.00th=[ 2245], 99.50th=[ 2573], 99.90th=[ 3261], 99.95th=[ 3458], 00:22:00.758 | 99.99th=[ 3818] 00:22:00.758 bw ( KiB/s): min=171008, max=202240, per=100.00%, avg=182632.89, stdev=8901.05, samples=9 00:22:00.758 iops : min=42752, max=50560, avg=45658.22, stdev=2225.40, samples=9 00:22:00.758 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=13.91% 00:22:00.758 lat (msec) : 2=84.55%, 4=1.52% 00:22:00.758 cpu : usr=68.08%, sys=28.84%, ctx=12, majf=0, minf=762 00:22:00.758 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:00.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.758 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:00.758 issued rwts: total=0,227607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:00.758 00:22:00.758 Run status group 0 (all jobs): 00:22:00.758 WRITE: bw=178MiB/s (186MB/s), 178MiB/s-178MiB/s (186MB/s-186MB/s), io=889MiB (932MB), run=5001-5001msec 00:22:01.696 ----------------------------------------------------- 00:22:01.696 Suppressions used: 00:22:01.696 count bytes template 00:22:01.696 1 11 /usr/src/fio/parse.c 00:22:01.696 1 8 libtcmalloc_minimal.so 00:22:01.696 1 904 libcrypto.so 00:22:01.696 ----------------------------------------------------- 00:22:01.696 00:22:01.696 00:22:01.696 real 0m14.845s 00:22:01.696 user 0m10.705s 00:22:01.696 sys 0m3.560s 00:22:01.696 ************************************ 00:22:01.696 END TEST xnvme_fio_plugin 00:22:01.696 ************************************ 00:22:01.696 10:27:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.696 10:27:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:01.696 Process with pid 73111 is not found 00:22:01.696 10:27:55 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73111 00:22:01.696 10:27:55 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73111 ']' 00:22:01.696 10:27:55 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73111 00:22:01.696 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73111) - No such process 00:22:01.696 10:27:55 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73111 is not found' 00:22:01.696 00:22:01.696 real 3m53.957s 00:22:01.696 user 2m19.704s 00:22:01.696 sys 1m17.712s 00:22:01.696 ************************************ 00:22:01.696 END TEST nvme_xnvme 00:22:01.696 ************************************ 00:22:01.696 10:27:55 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.696 10:27:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:01.696 10:27:55 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:01.696 10:27:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.696 10:27:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.696 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:22:01.696 ************************************ 00:22:01.696 START TEST blockdev_xnvme 00:22:01.696 ************************************ 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:01.696 * Looking for test storage... 00:22:01.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.696 10:27:55 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.696 --rc genhtml_branch_coverage=1 00:22:01.696 --rc genhtml_function_coverage=1 00:22:01.696 --rc genhtml_legend=1 00:22:01.696 --rc geninfo_all_blocks=1 00:22:01.696 --rc geninfo_unexecuted_blocks=1 00:22:01.696 00:22:01.696 ' 00:22:01.696 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.697 --rc genhtml_branch_coverage=1 00:22:01.697 --rc genhtml_function_coverage=1 00:22:01.697 --rc genhtml_legend=1 00:22:01.697 --rc geninfo_all_blocks=1 00:22:01.697 --rc geninfo_unexecuted_blocks=1 00:22:01.697 00:22:01.697 ' 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.697 --rc genhtml_branch_coverage=1 00:22:01.697 --rc genhtml_function_coverage=1 00:22:01.697 --rc genhtml_legend=1 00:22:01.697 --rc geninfo_all_blocks=1 00:22:01.697 --rc geninfo_unexecuted_blocks=1 00:22:01.697 00:22:01.697 ' 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.697 --rc genhtml_branch_coverage=1 00:22:01.697 --rc genhtml_function_coverage=1 00:22:01.697 --rc genhtml_legend=1 00:22:01.697 --rc geninfo_all_blocks=1 00:22:01.697 --rc geninfo_unexecuted_blocks=1 00:22:01.697 00:22:01.697 ' 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:22:01.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73770 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73770 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73770 ']' 00:22:01.697 10:27:55 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.697 10:27:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:01.956 [2024-11-25 10:27:56.116173] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
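The blockdev_xnvme suite that this spdk_tgt (pid 73770) serves builds its bdev list up front: setup_xnvme_conf, traced below, scans /dev/nvme*n*, skips zoned namespaces via /sys/block/*/queue/zoned, and queues one bdev_xnvme_create line per remaining device. A condensed sketch of that logic — folding the zoned-device scan and the device loop together, and assuming the io_uring mechanism the suite selects here:

# Condensed sketch of the setup_xnvme_conf logic traced below: every
# non-zoned /dev/nvme*n* block device becomes one bdev_xnvme_create call
# (device path, bare device name as bdev name, io mechanism, conserve_cpu).
io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue # skip zoned namespaces
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done
# The harness then feeds these lines to the target in one rpc_cmd batch:
printf '%s\n' "${nvmes[@]}"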
00:22:01.956 [2024-11-25 10:27:56.116599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73770 ] 00:22:02.215 [2024-11-25 10:27:56.301952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.215 [2024-11-25 10:27:56.467284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.151 10:27:57 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.151 10:27:57 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:22:03.151 10:27:57 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:03.151 10:27:57 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:22:03.151 10:27:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:03.151 10:27:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:03.151 10:27:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:03.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.285 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:04.285 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:04.285 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:22:04.285 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:04.285 10:27:58 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:04.285 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:22:04.286 nvme0n1 00:22:04.286 nvme0n2 00:22:04.286 nvme0n3 00:22:04.286 nvme1n1 00:22:04.286 nvme2n1 00:22:04.286 nvme3n1 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:22:04.286 10:27:58 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.286 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:04.286 10:27:58 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "132bf61a-785d-4c02-ba34-9a2f3b84aa32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "132bf61a-785d-4c02-ba34-9a2f3b84aa32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ce1faeab-f347-4d0a-8469-d54fa686b344"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce1faeab-f347-4d0a-8469-d54fa686b344",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "92e30a1a-07db-4d5a-a069-883953dcaba6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92e30a1a-07db-4d5a-a069-883953dcaba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "11a745ff-ec93-40e4-a319-b5794c6e729b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "11a745ff-ec93-40e4-a319-b5794c6e729b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0d518fb6-16e8-4b22-b2e3-505b5bcb0c5a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0d518fb6-16e8-4b22-b2e3-505b5bcb0c5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2b2102af-49e9-4e7c-91bf-13e89b701d2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2b2102af-49e9-4e7c-91bf-13e89b701d2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:04.545 10:27:58 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73770 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73770 ']' 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73770 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73770 00:22:04.545 killing process with pid 73770 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73770' 00:22:04.545 10:27:58 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73770 00:22:04.545 
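The kill/wait pair around this point comes from the killprocess helper in common/autotest_common.sh. Reconstructed from this xtrace alone, it is roughly the sketch below (the real helper also special-cases processes wrapped in sudo, a path this run does not hit):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1            # a pid argument is required
      kill -0 "$pid" || return 1           # bail out if the process is already gone
      if [ "$(uname)" = Linux ]; then
          # SPDK apps report a comm name like reactor_0; the helper reads it
          # only to detect a sudo wrapper, which takes a different path (elided)
          ps --no-headers -o comm= "$pid"
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                          # reap it and surface its exit status
  }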
10:27:58 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73770 00:22:07.074 10:28:00 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:07.074 10:28:00 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:07.074 10:28:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:07.074 10:28:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.074 10:28:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:07.074 ************************************ 00:22:07.074 START TEST bdev_hello_world 00:22:07.074 ************************************ 00:22:07.074 10:28:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:07.074 [2024-11-25 10:28:01.100387] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:22:07.074 [2024-11-25 10:28:01.100859] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74060 ] 00:22:07.074 [2024-11-25 10:28:01.308496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.332 [2024-11-25 10:28:01.441812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.590 [2024-11-25 10:28:01.877506] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:07.590 [2024-11-25 10:28:01.877760] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:07.590 [2024-11-25 10:28:01.877810] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:07.590 [2024-11-25 10:28:01.880246] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:07.590 [2024-11-25 10:28:01.880554] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:07.590 [2024-11-25 10:28:01.880595] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:07.590 [2024-11-25 10:28:01.880795] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
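The hello-world pass above can be replayed by hand against the same generated config; a minimal sketch using the paths from this run (root is assumed, since the xnvme bdevs sit on raw /dev/nvme* nodes):

  cd /home/vagrant/spdk_repo/spdk
  # -b picks which bdev from the JSON config the example opens; the app
  # writes "Hello World!" through an io channel and reads it back
  sudo ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b nvme0n1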
00:22:07.590 00:22:07.590 [2024-11-25 10:28:01.880831] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:08.966 00:22:08.966 real 0m1.929s 00:22:08.966 user 0m1.530s 00:22:08.966 sys 0m0.282s 00:22:08.966 10:28:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.966 10:28:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:08.966 ************************************ 00:22:08.966 END TEST bdev_hello_world 00:22:08.966 ************************************ 00:22:08.966 10:28:02 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:08.966 10:28:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:08.966 10:28:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.966 10:28:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:08.966 ************************************ 00:22:08.966 START TEST bdev_bounds 00:22:08.966 ************************************ 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:08.966 Process bdevio pid: 74102 00:22:08.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74102 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74102' 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74102 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74102 ']' 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.966 10:28:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:08.966 [2024-11-25 10:28:03.080755] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
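bdev_bounds drives bdevio in three steps: launch it idle, wait for its RPC socket, then trigger the suites. A sketch of that pattern, with the waitforlisten loop approximated by polling rpc_get_methods (the real helper in common/autotest_common.sh also watches the pid and caps retries at 100, as the trace shows):

  # 1. -w makes bdevio register the bdevs but wait for an RPC before testing
  sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!

  # 2. poll until the app answers on the default socket /var/tmp/spdk.sock
  until sudo ./scripts/rpc.py rpc_get_methods &>/dev/null; do
      sleep 0.1
  done

  # 3. kick off the registered CUnit suites over that socket
  sudo ./test/bdev/bdevio/tests.py perform_tests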
00:22:08.966 [2024-11-25 10:28:03.080969] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74102 ] 00:22:08.966 [2024-11-25 10:28:03.268052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:09.225 [2024-11-25 10:28:03.410551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.225 [2024-11-25 10:28:03.410631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.225 [2024-11-25 10:28:03.410638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.791 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.791 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:09.791 10:28:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:10.050 I/O targets: 00:22:10.050 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:10.050 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:10.050 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:10.050 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:22:10.050 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:22:10.050 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:22:10.050 00:22:10.050 00:22:10.050 CUnit - A unit testing framework for C - Version 2.1-3 00:22:10.050 http://cunit.sourceforge.net/ 00:22:10.050 00:22:10.050 00:22:10.050 Suite: bdevio tests on: nvme3n1 00:22:10.050 Test: blockdev write read block ...passed 00:22:10.050 Test: blockdev write zeroes read block ...passed 00:22:10.050 Test: blockdev write zeroes read no split ...passed 00:22:10.050 Test: blockdev write zeroes read split ...passed 00:22:10.050 Test: blockdev write zeroes read split partial ...passed 00:22:10.050 Test: blockdev reset ...passed 00:22:10.050 Test: blockdev write read 8 blocks ...passed 00:22:10.050 Test: blockdev write read size > 128k ...passed 00:22:10.050 Test: blockdev write read invalid size ...passed 00:22:10.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.050 Test: blockdev write read max offset ...passed 00:22:10.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.050 Test: blockdev writev readv 8 blocks ...passed 00:22:10.050 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.050 Test: blockdev writev readv block ...passed 00:22:10.050 Test: blockdev writev readv size > 128k ...passed 00:22:10.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.050 Test: blockdev comparev and writev ...passed 00:22:10.050 Test: blockdev nvme passthru rw ...passed 00:22:10.050 Test: blockdev nvme passthru vendor specific ...passed 00:22:10.050 Test: blockdev nvme admin passthru ...passed 00:22:10.050 Test: blockdev copy ...passed 00:22:10.050 Suite: bdevio tests on: nvme2n1 00:22:10.050 Test: blockdev write read block ...passed 00:22:10.050 Test: blockdev write zeroes read block ...passed 00:22:10.050 Test: blockdev write zeroes read no split ...passed 00:22:10.050 Test: blockdev write zeroes read split ...passed 00:22:10.050 Test: blockdev write zeroes read split partial ...passed 00:22:10.050 Test: blockdev reset ...passed 
00:22:10.050 Test: blockdev write read 8 blocks ...passed 00:22:10.050 Test: blockdev write read size > 128k ...passed 00:22:10.050 Test: blockdev write read invalid size ...passed 00:22:10.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.050 Test: blockdev write read max offset ...passed 00:22:10.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.050 Test: blockdev writev readv 8 blocks ...passed 00:22:10.050 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.050 Test: blockdev writev readv block ...passed 00:22:10.050 Test: blockdev writev readv size > 128k ...passed 00:22:10.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.050 Test: blockdev comparev and writev ...passed 00:22:10.050 Test: blockdev nvme passthru rw ...passed 00:22:10.050 Test: blockdev nvme passthru vendor specific ...passed 00:22:10.050 Test: blockdev nvme admin passthru ...passed 00:22:10.050 Test: blockdev copy ...passed 00:22:10.050 Suite: bdevio tests on: nvme1n1 00:22:10.050 Test: blockdev write read block ...passed 00:22:10.050 Test: blockdev write zeroes read block ...passed 00:22:10.050 Test: blockdev write zeroes read no split ...passed 00:22:10.050 Test: blockdev write zeroes read split ...passed 00:22:10.050 Test: blockdev write zeroes read split partial ...passed 00:22:10.050 Test: blockdev reset ...passed 00:22:10.050 Test: blockdev write read 8 blocks ...passed 00:22:10.050 Test: blockdev write read size > 128k ...passed 00:22:10.050 Test: blockdev write read invalid size ...passed 00:22:10.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.050 Test: blockdev write read max offset ...passed 00:22:10.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.050 Test: blockdev writev readv 8 blocks ...passed 00:22:10.050 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.050 Test: blockdev writev readv block ...passed 00:22:10.050 Test: blockdev writev readv size > 128k ...passed 00:22:10.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.050 Test: blockdev comparev and writev ...passed 00:22:10.050 Test: blockdev nvme passthru rw ...passed 00:22:10.050 Test: blockdev nvme passthru vendor specific ...passed 00:22:10.050 Test: blockdev nvme admin passthru ...passed 00:22:10.050 Test: blockdev copy ...passed 00:22:10.050 Suite: bdevio tests on: nvme0n3 00:22:10.050 Test: blockdev write read block ...passed 00:22:10.050 Test: blockdev write zeroes read block ...passed 00:22:10.050 Test: blockdev write zeroes read no split ...passed 00:22:10.309 Test: blockdev write zeroes read split ...passed 00:22:10.309 Test: blockdev write zeroes read split partial ...passed 00:22:10.309 Test: blockdev reset ...passed 00:22:10.309 Test: blockdev write read 8 blocks ...passed 00:22:10.309 Test: blockdev write read size > 128k ...passed 00:22:10.309 Test: blockdev write read invalid size ...passed 00:22:10.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.309 Test: blockdev write read max offset ...passed 00:22:10.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.309 Test: blockdev writev readv 8 blocks 
...passed 00:22:10.309 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.309 Test: blockdev writev readv block ...passed 00:22:10.309 Test: blockdev writev readv size > 128k ...passed 00:22:10.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.309 Test: blockdev comparev and writev ...passed 00:22:10.309 Test: blockdev nvme passthru rw ...passed 00:22:10.309 Test: blockdev nvme passthru vendor specific ...passed 00:22:10.309 Test: blockdev nvme admin passthru ...passed 00:22:10.309 Test: blockdev copy ...passed 00:22:10.309 Suite: bdevio tests on: nvme0n2 00:22:10.309 Test: blockdev write read block ...passed 00:22:10.309 Test: blockdev write zeroes read block ...passed 00:22:10.309 Test: blockdev write zeroes read no split ...passed 00:22:10.309 Test: blockdev write zeroes read split ...passed 00:22:10.309 Test: blockdev write zeroes read split partial ...passed 00:22:10.309 Test: blockdev reset ...passed 00:22:10.309 Test: blockdev write read 8 blocks ...passed 00:22:10.309 Test: blockdev write read size > 128k ...passed 00:22:10.309 Test: blockdev write read invalid size ...passed 00:22:10.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.309 Test: blockdev write read max offset ...passed 00:22:10.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.309 Test: blockdev writev readv 8 blocks ...passed 00:22:10.309 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.309 Test: blockdev writev readv block ...passed 00:22:10.309 Test: blockdev writev readv size > 128k ...passed 00:22:10.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.309 Test: blockdev comparev and writev ...passed 00:22:10.309 Test: blockdev nvme passthru rw ...passed 00:22:10.309 Test: blockdev nvme passthru vendor specific ...passed 00:22:10.309 Test: blockdev nvme admin passthru ...passed 00:22:10.309 Test: blockdev copy ...passed 00:22:10.309 Suite: bdevio tests on: nvme0n1 00:22:10.309 Test: blockdev write read block ...passed 00:22:10.309 Test: blockdev write zeroes read block ...passed 00:22:10.309 Test: blockdev write zeroes read no split ...passed 00:22:10.309 Test: blockdev write zeroes read split ...passed 00:22:10.309 Test: blockdev write zeroes read split partial ...passed 00:22:10.309 Test: blockdev reset ...passed 00:22:10.309 Test: blockdev write read 8 blocks ...passed 00:22:10.309 Test: blockdev write read size > 128k ...passed 00:22:10.309 Test: blockdev write read invalid size ...passed 00:22:10.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.309 Test: blockdev write read max offset ...passed 00:22:10.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.309 Test: blockdev writev readv 8 blocks ...passed 00:22:10.309 Test: blockdev writev readv 30 x 1block ...passed 00:22:10.309 Test: blockdev writev readv block ...passed 00:22:10.309 Test: blockdev writev readv size > 128k ...passed 00:22:10.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:10.309 Test: blockdev comparev and writev ...passed 00:22:10.309 Test: blockdev nvme passthru rw ...passed 00:22:10.309 Test: blockdev nvme passthru vendor specific ...passed 00:22:10.309 Test: blockdev nvme admin passthru ...passed 00:22:10.309 Test: blockdev copy ...passed 
00:22:10.309 00:22:10.309 Run Summary: Type Total Ran Passed Failed Inactive 00:22:10.309 suites 6 6 n/a 0 0 00:22:10.309 tests 138 138 138 0 0 00:22:10.309 asserts 780 780 780 0 n/a 00:22:10.309 00:22:10.309 Elapsed time = 1.134 seconds 00:22:10.309 0 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74102 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74102 ']' 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74102 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74102 00:22:10.309 killing process with pid 74102 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74102' 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74102 00:22:10.309 10:28:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74102 00:22:11.685 ************************************ 00:22:11.685 END TEST bdev_bounds 00:22:11.685 ************************************ 00:22:11.685 10:28:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:11.685 00:22:11.685 real 0m2.709s 00:22:11.685 user 0m6.707s 00:22:11.685 sys 0m0.455s 00:22:11.685 10:28:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.685 10:28:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:11.685 10:28:05 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:22:11.685 10:28:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:11.685 10:28:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.685 10:28:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.685 ************************************ 00:22:11.685 START TEST bdev_nbd 00:22:11.685 ************************************ 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
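The bdev_nbd test starting here needs two things before any device can be exported: the kernel nbd module, and a bare SPDK app holding the six bdevs on its own RPC socket. A sketch of that preamble (the modprobe is an addition for completeness; the trace only checks /sys/module/nbd, the module having been loaded earlier in the job):

  # nbd must be in the kernel before /dev/nbd* nodes are usable
  [[ -e /sys/module/nbd ]] || sudo modprobe nbd

  # bdev_svc registers the bdevs from the JSON config and then idles;
  # a dedicated socket keeps it clear of the other tests' RPC traffic
  sudo ./test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json test/bdev/bdev.json &
  nbd_pid=$!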
00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:11.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74163 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74163 /var/tmp/spdk-nbd.sock 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74163 ']' 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:11.685 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:11.686 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.686 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:11.686 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.686 10:28:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:11.686 [2024-11-25 10:28:05.836012] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
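Everything that follows is the per-device export-and-verify loop: nbd_start_disk maps a bdev to the next free /dev/nbd*, waitfornbd polls /proc/partitions for it, and a single O_DIRECT read proves the block device works. Condensed into a sketch (the scratch-file path here is arbitrary; the trace writes to test/bdev/nbdtest):

  # export one bdev as a kernel block device; with no device argument
  # the RPC picks a free /dev/nbd* and prints its path
  nbd_dev=$(sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)

  # wait until the kernel has published the device (the trace retries up to 20x)
  until grep -q -w "${nbd_dev##*/}" /proc/partitions; do
      sleep 0.1
  done

  # read exactly one 4 KiB block, bypassing the page cache, and check the size
  sudo dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]
  rm -f /tmp/nbdtest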
00:22:11.686 [2024-11-25 10:28:05.836171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.686 [2024-11-25 10:28:06.008345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.943 [2024-11-25 10:28:06.142979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:12.511 10:28:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:22:12.769 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:12.769 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:12.769 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:12.769 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:12.769 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:12.770 
1+0 records in 00:22:12.770 1+0 records out 00:22:12.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430437 s, 9.5 MB/s 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:12.770 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:13.336 1+0 records in 00:22:13.336 1+0 records out 00:22:13.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475101 s, 8.6 MB/s 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:13.336 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:22:13.595 10:28:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:13.595 1+0 records in 00:22:13.595 1+0 records out 00:22:13.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606096 s, 6.8 MB/s 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:13.595 10:28:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:13.853 1+0 records in 00:22:13.853 1+0 records out 00:22:13.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650326 s, 6.3 MB/s 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:13.853 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:14.132 1+0 records in 00:22:14.132 1+0 records out 00:22:14.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046117 s, 8.9 MB/s 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:14.132 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:22:14.390 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:22:14.390 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:22:14.390 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:22:14.390 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:22:14.391 10:28:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:14.391 1+0 records in 00:22:14.391 1+0 records out 00:22:14.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827753 s, 4.9 MB/s 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:14.391 10:28:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:14.959 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:14.959 { 00:22:14.959 "nbd_device": "/dev/nbd0", 00:22:14.959 "bdev_name": "nvme0n1" 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "nbd_device": "/dev/nbd1", 00:22:14.959 "bdev_name": "nvme0n2" 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "nbd_device": "/dev/nbd2", 00:22:14.959 "bdev_name": "nvme0n3" 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "nbd_device": "/dev/nbd3", 00:22:14.959 "bdev_name": "nvme1n1" 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "nbd_device": "/dev/nbd4", 00:22:14.959 "bdev_name": "nvme2n1" 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "nbd_device": "/dev/nbd5", 00:22:14.959 "bdev_name": "nvme3n1" 00:22:14.959 } 00:22:14.959 ]' 00:22:14.959 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:14.960 { 00:22:14.960 "nbd_device": "/dev/nbd0", 00:22:14.960 "bdev_name": "nvme0n1" 00:22:14.960 }, 00:22:14.960 { 00:22:14.960 "nbd_device": "/dev/nbd1", 00:22:14.960 "bdev_name": "nvme0n2" 00:22:14.960 }, 00:22:14.960 { 00:22:14.960 "nbd_device": "/dev/nbd2", 00:22:14.960 "bdev_name": "nvme0n3" 00:22:14.960 }, 00:22:14.960 { 00:22:14.960 "nbd_device": "/dev/nbd3", 00:22:14.960 "bdev_name": "nvme1n1" 00:22:14.960 }, 00:22:14.960 { 00:22:14.960 "nbd_device": "/dev/nbd4", 00:22:14.960 "bdev_name": "nvme2n1" 00:22:14.960 }, 00:22:14.960 { 00:22:14.960 "nbd_device": "/dev/nbd5", 00:22:14.960 "bdev_name": "nvme3n1" 00:22:14.960 } 00:22:14.960 ]' 00:22:14.960 10:28:09 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:14.960 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.220 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.479 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.738 10:28:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:22:15.996 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:22:15.996 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:22:15.996 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:22:15.996 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.996 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.997 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:22:15.997 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:15.997 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.997 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.997 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.256 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:22:16.515 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:22:16.515 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:16.516 10:28:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:16.775 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:16.775 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:16.775 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:16.775 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:16.775 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:16.775 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:17.034 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:22:17.293 /dev/nbd0 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.293 1+0 records in 00:22:17.293 1+0 records out 00:22:17.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615188 s, 6.7 MB/s 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:17.293 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:22:17.551 /dev/nbd1 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.551 1+0 records in 00:22:17.551 1+0 records out 00:22:17.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579146 s, 7.1 MB/s 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:17.551 10:28:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:17.551 10:28:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:22:17.878 /dev/nbd10 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.878 1+0 records in 00:22:17.878 1+0 records out 00:22:17.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756978 s, 5.4 MB/s 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:17.878 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:22:18.158 /dev/nbd11 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:18.158 10:28:12 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.158 1+0 records in 00:22:18.158 1+0 records out 00:22:18.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727852 s, 5.6 MB/s 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:18.158 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:22:18.725 /dev/nbd12 00:22:18.725 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.726 1+0 records in 00:22:18.726 1+0 records out 00:22:18.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575906 s, 7.1 MB/s 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:18.726 10:28:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:22:18.986 /dev/nbd13 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.986 1+0 records in 00:22:18.986 1+0 records out 00:22:18.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664568 s, 6.2 MB/s 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:18.986 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd0", 00:22:19.245 "bdev_name": "nvme0n1" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd1", 00:22:19.245 "bdev_name": "nvme0n2" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd10", 00:22:19.245 "bdev_name": "nvme0n3" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd11", 00:22:19.245 "bdev_name": "nvme1n1" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd12", 00:22:19.245 "bdev_name": "nvme2n1" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd13", 00:22:19.245 "bdev_name": "nvme3n1" 00:22:19.245 } 00:22:19.245 ]' 00:22:19.245 10:28:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd0", 00:22:19.245 "bdev_name": "nvme0n1" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd1", 00:22:19.245 "bdev_name": "nvme0n2" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd10", 00:22:19.245 "bdev_name": "nvme0n3" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd11", 00:22:19.245 "bdev_name": "nvme1n1" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd12", 00:22:19.245 "bdev_name": "nvme2n1" 00:22:19.245 }, 00:22:19.245 { 00:22:19.245 "nbd_device": "/dev/nbd13", 00:22:19.245 "bdev_name": "nvme3n1" 00:22:19.245 } 00:22:19.245 ]' 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:19.245 /dev/nbd1 00:22:19.245 /dev/nbd10 00:22:19.245 /dev/nbd11 00:22:19.245 /dev/nbd12 00:22:19.245 /dev/nbd13' 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:19.245 /dev/nbd1 00:22:19.245 /dev/nbd10 00:22:19.245 /dev/nbd11 00:22:19.245 /dev/nbd12 00:22:19.245 /dev/nbd13' 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:19.245 256+0 records in 00:22:19.245 256+0 records out 00:22:19.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00902099 s, 116 MB/s 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:19.245 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:19.504 256+0 records in 00:22:19.504 256+0 records out 00:22:19.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117959 s, 8.9 MB/s 00:22:19.504 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:19.504 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:19.504 256+0 records in 00:22:19.504 256+0 records out 00:22:19.504 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.122276 s, 8.6 MB/s 00:22:19.504 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:19.504 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:22:19.762 256+0 records in 00:22:19.762 256+0 records out 00:22:19.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128969 s, 8.1 MB/s 00:22:19.762 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:19.762 10:28:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:22:19.762 256+0 records in 00:22:19.762 256+0 records out 00:22:19.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124968 s, 8.4 MB/s 00:22:19.762 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:19.762 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:22:20.021 256+0 records in 00:22:20.021 256+0 records out 00:22:20.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143807 s, 7.3 MB/s 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:22:20.021 256+0 records in 00:22:20.021 256+0 records out 00:22:20.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165554 s, 6.3 MB/s 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:20.021 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.281 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.540 10:28:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.802 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.062 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.321 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.889 10:28:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.889 10:28:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:21.889 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:22.456 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:22.714 malloc_lvol_verify 00:22:22.714 10:28:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:22.972 c5ef3174-1d06-4c51-bc58-6b9c2266cfa9 00:22:22.972 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:23.231 6705bcd2-a136-4f15-a135-de9d1db299ad 00:22:23.231 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:23.803 /dev/nbd0 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
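The nbd_with_lvol_verify step traced here layers a logical volume on a malloc bdev, exports it through the kernel NBD driver, waits until the kernel publishes a non-zero capacity in /sys/block/nbd0/size, and only then formats it; the mke2fs output follows below. A minimal standalone sketch of the same sequence, assuming an SPDK application is already serving RPCs on /var/tmp/spdk-nbd.sock (every rpc.py subcommand is one exercised in this run; the polling loop is a hand-written paraphrase of wait_for_nbd_set_capacity):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore "lvs" on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside "lvs"
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    until [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) > 0 )); do
        sleep 0.1                                          # capacity not published yet
    done
    mkfs.ext4 /dev/nbd0                                    # basic write-path smoke test
    $rpc nbd_stop_disk /dev/nbd0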
00:22:23.803 mke2fs 1.47.0 (5-Feb-2023) 00:22:23.803 Discarding device blocks: 0/4096 done 00:22:23.803 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:23.803 00:22:23.803 Allocating group tables: 0/1 done 00:22:23.803 Writing inode tables: 0/1 done 00:22:23.803 Creating journal (1024 blocks): done 00:22:23.803 Writing superblocks and filesystem accounting information: 0/1 done 00:22:23.803 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:23.803 10:28:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74163 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74163 ']' 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74163 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74163 00:22:24.062 killing process with pid 74163 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74163' 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74163 00:22:24.062 10:28:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74163 00:22:25.439 ************************************ 00:22:25.439 END TEST bdev_nbd 00:22:25.439 ************************************ 00:22:25.439 10:28:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:25.439 00:22:25.439 real 0m13.744s 00:22:25.439 user 0m19.803s 00:22:25.439 sys 0m4.395s 00:22:25.439 10:28:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.439 
10:28:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:25.439 10:28:19 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:25.439 10:28:19 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:22:25.439 10:28:19 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:22:25.439 10:28:19 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:25.439 10:28:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:25.439 10:28:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.439 10:28:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:25.439 ************************************ 00:22:25.439 START TEST bdev_fio 00:22:25.439 ************************************ 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:25.439 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:22:25.439 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:25.440 ************************************ 00:22:25.440 START TEST bdev_fio_rw_verify 00:22:25.440 ************************************ 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:25.440 10:28:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:25.699 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:25.699 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:25.699 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:25.699 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:25.699 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:25.699 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:25.699 fio-3.35 00:22:25.699 Starting 6 threads 00:22:37.930 00:22:37.930 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74604: Mon Nov 25 10:28:30 2024 00:22:37.930 read: IOPS=25.2k, BW=98.4MiB/s (103MB/s)(984MiB/10001msec) 00:22:37.930 slat (usec): min=3, max=3409, avg= 8.46, stdev=10.17 00:22:37.930 clat (usec): min=116, max=10299, avg=701.44, 
stdev=297.02 00:22:37.930 lat (usec): min=122, max=10303, avg=709.90, stdev=297.91 00:22:37.930 clat percentiles (usec): 00:22:37.930 | 50.000th=[ 693], 99.000th=[ 1500], 99.900th=[ 2769], 99.990th=[ 4424], 00:22:37.930 | 99.999th=[ 9765] 00:22:37.930 write: IOPS=25.4k, BW=99.3MiB/s (104MB/s)(993MiB/10001msec); 0 zone resets 00:22:37.930 slat (usec): min=13, max=1524, avg=34.46, stdev=42.90 00:22:37.930 clat (usec): min=89, max=4690, avg=852.11, stdev=314.31 00:22:37.930 lat (usec): min=121, max=4741, avg=886.58, stdev=319.37 00:22:37.930 clat percentiles (usec): 00:22:37.930 | 50.000th=[ 840], 99.000th=[ 1778], 99.900th=[ 2409], 99.990th=[ 3490], 00:22:37.930 | 99.999th=[ 4555] 00:22:37.930 bw ( KiB/s): min=91125, max=121488, per=100.00%, avg=101763.47, stdev=1481.90, samples=114 00:22:37.930 iops : min=22780, max=30372, avg=25440.21, stdev=370.52, samples=114 00:22:37.930 lat (usec) : 100=0.01%, 250=2.01%, 500=17.15%, 750=28.87%, 1000=31.23% 00:22:37.930 lat (msec) : 2=20.43%, 4=0.29%, 10=0.01%, 20=0.01% 00:22:37.930 cpu : usr=54.98%, sys=29.01%, ctx=7185, majf=0, minf=22018 00:22:37.930 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.930 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.930 issued rwts: total=251889,254289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:37.930 00:22:37.930 Run status group 0 (all jobs): 00:22:37.930 READ: bw=98.4MiB/s (103MB/s), 98.4MiB/s-98.4MiB/s (103MB/s-103MB/s), io=984MiB (1032MB), run=10001-10001msec 00:22:37.930 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=993MiB (1042MB), run=10001-10001msec 00:22:38.189 ----------------------------------------------------- 00:22:38.189 Suppressions used: 00:22:38.189 count bytes template 00:22:38.189 6 48 /usr/src/fio/parse.c 00:22:38.189 2233 214368 /usr/src/fio/iolog.c 00:22:38.189 1 8 libtcmalloc_minimal.so 00:22:38.189 1 904 libcrypto.so 00:22:38.189 ----------------------------------------------------- 00:22:38.189 00:22:38.189 00:22:38.189 real 0m12.815s 00:22:38.189 user 0m35.186s 00:22:38.189 sys 0m17.912s 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:38.189 ************************************ 00:22:38.189 END TEST bdev_fio_rw_verify 00:22:38.189 ************************************ 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:38.189 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:38.190 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "132bf61a-785d-4c02-ba34-9a2f3b84aa32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "132bf61a-785d-4c02-ba34-9a2f3b84aa32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ce1faeab-f347-4d0a-8469-d54fa686b344"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce1faeab-f347-4d0a-8469-d54fa686b344",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "92e30a1a-07db-4d5a-a069-883953dcaba6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92e30a1a-07db-4d5a-a069-883953dcaba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "11a745ff-ec93-40e4-a319-b5794c6e729b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "11a745ff-ec93-40e4-a319-b5794c6e729b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0d518fb6-16e8-4b22-b2e3-505b5bcb0c5a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0d518fb6-16e8-4b22-b2e3-505b5bcb0c5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2b2102af-49e9-4e7c-91bf-13e89b701d2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2b2102af-49e9-4e7c-91bf-13e89b701d2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:38.449 /home/vagrant/spdk_repo/spdk 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
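To recap the fio stage that just completed: fio_config_gen seeds a verify-workload bdev.fio, the loop above appends one [job_<bdev>] section per bdev plus serialize_overlap=1 for fio 3.x, and fio is launched with ASan and the SPDK fio plugin preloaded together so the spdk_bdev ioengine resolves. A rough reconstruction follows; the generated file itself is never echoed into the trace, so the thread=1 key is an assumption, while the rw value, the job section, and the command line are taken from the trace (auxiliary flags such as --spdk_mem and --aux-path omitted):

    cat > bdev.fio <<'EOF'
    [global]
    ; thread=1 is assumed fio_config_gen output; rw matches the job lines above
    thread=1
    rw=randwrite
    serialize_overlap=1
    [job_nvme0n1]
    filename=nvme0n1
    ; ...five more [job_*] sections, one per bdev, as echoed above...
    EOF
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio \
      --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json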
00:22:38.449 00:22:38.449 real 0m13.015s 00:22:38.449 user 0m35.302s 00:22:38.449 sys 0m17.997s 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.449 ************************************ 00:22:38.449 END TEST bdev_fio 00:22:38.449 ************************************ 00:22:38.449 10:28:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 10:28:32 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:38.449 10:28:32 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:38.449 10:28:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:38.449 10:28:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.449 10:28:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:38.449 ************************************ 00:22:38.449 START TEST bdev_verify 00:22:38.449 ************************************ 00:22:38.449 10:28:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:38.449 [2024-11-25 10:28:32.716529] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:22:38.449 [2024-11-25 10:28:32.716716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74780 ] 00:22:38.708 [2024-11-25 10:28:32.907534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:38.966 [2024-11-25 10:28:33.072197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.966 [2024-11-25 10:28:33.072200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.532 Running I/O for 5 seconds... 
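bdev_verify now drives the same six bdevs through bdevperf rather than fio: -w verify writes each 4 KiB block, reads it back, and compares, and with the 0x3 core mask each device ends up with a job on both reactors, which is why every bdev appears twice in the table below. A sketch of the equivalent manual run, arguments taken verbatim from the command traced above (the comments are glosses; -C is simply passed through by blockdev.sh):

    # -q 128   : 128 outstanding IOs per job
    # -o 4096  : 4 KiB IO size
    # -w verify: write a pattern, read it back, compare
    # -t 5     : run for 5 seconds
    # -m 0x3   : reactors on cores 0 and 1
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

In the summary that follows, the nvme3n1 jobs sustain roughly twice the IOPS of the other five devices at about half the average latency.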
00:22:41.840 24160.00 IOPS, 94.38 MiB/s [2024-11-25T10:28:37.107Z] 23456.00 IOPS, 91.62 MiB/s [2024-11-25T10:28:38.104Z] 23253.33 IOPS, 90.83 MiB/s [2024-11-25T10:28:39.047Z] 22736.00 IOPS, 88.81 MiB/s [2024-11-25T10:28:39.047Z] 22252.80 IOPS, 86.93 MiB/s
00:22:44.714 Latency(us)
00:22:44.714 [2024-11-25T10:28:39.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:44.714 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:44.714 Verification LBA range: start 0x0 length 0x80000
00:22:44.714 nvme0n1 : 5.08 1612.84 6.30 0.00 0.00 79221.46 16562.73 78166.57
00:22:44.714 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:44.714 Verification LBA range: start 0x80000 length 0x80000
00:22:44.714 nvme0n1 : 5.04 1573.60 6.15 0.00 0.00 81204.22 14715.81 72923.69
00:22:44.714 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:44.714 Verification LBA range: start 0x0 length 0x80000
00:22:44.714 nvme0n2 : 5.08 1612.19 6.30 0.00 0.00 79102.81 20733.21 71493.82
00:22:44.714 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:44.714 Verification LBA range: start 0x80000 length 0x80000
00:22:44.714 nvme0n2 : 5.03 1578.59 6.17 0.00 0.00 80814.95 10128.29 80073.08
00:22:44.714 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:44.714 Verification LBA range: start 0x0 length 0x80000
00:22:44.714 nvme0n3 : 5.08 1611.54 6.30 0.00 0.00 78971.25 20733.21 68157.44
00:22:44.714 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0x80000 length 0x80000
00:22:44.715 nvme0n3 : 5.05 1572.95 6.14 0.00 0.00 80979.82 22401.40 61961.31
00:22:44.715 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0x0 length 0x20000
00:22:44.715 nvme1n1 : 5.09 1610.83 6.29 0.00 0.00 78820.54 19184.17 77213.32
00:22:44.715 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0x20000 length 0x20000
00:22:44.715 nvme1n1 : 5.05 1572.31 6.14 0.00 0.00 80883.58 17754.30 71017.19
00:22:44.715 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0x0 length 0xa0000
00:22:44.715 nvme2n1 : 5.09 1610.16 6.29 0.00 0.00 78703.40 14894.55 82932.83
00:22:44.715 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0xa0000 length 0xa0000
00:22:44.715 nvme2n1 : 5.09 1583.91 6.19 0.00 0.00 80173.40 4885.41 80073.08
00:22:44.715 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0x0 length 0xbd0bd
00:22:44.715 nvme3n1 : 5.09 3078.23 12.02 0.00 0.00 41021.00 4081.11 65297.69
00:22:44.715 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:44.715 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:22:44.715 nvme3n1 : 5.09 2962.58 11.57 0.00 0.00 42703.67 3961.95 67204.19
00:22:44.715 [2024-11-25T10:28:39.048Z] ===================================================================================================================
00:22:44.715 [2024-11-25T10:28:39.048Z] Total : 21979.73 85.86 0.00 0.00 69387.11 3961.95 82932.83
00:22:46.089
00:22:46.089 real 0m7.427s
00:22:46.089 user 0m11.646s
00:22:46.089 sys 0m1.917s
00:22:46.089 10:28:40 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.089 10:28:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:46.089 ************************************ 00:22:46.089 END TEST bdev_verify 00:22:46.089 ************************************ 00:22:46.089 10:28:40 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:46.089 10:28:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:46.089 10:28:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.089 10:28:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:46.089 ************************************ 00:22:46.089 START TEST bdev_verify_big_io 00:22:46.089 ************************************ 00:22:46.089 10:28:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:46.089 [2024-11-25 10:28:40.190722] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:22:46.089 [2024-11-25 10:28:40.190942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74890 ] 00:22:46.089 [2024-11-25 10:28:40.380858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:46.348 [2024-11-25 10:28:40.561651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.348 [2024-11-25 10:28:40.561658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.977 Running I/O for 5 seconds... 
00:22:51.216 464.00 IOPS, 29.00 MiB/s [2024-11-25T10:28:47.455Z] 2432.00 IOPS, 152.00 MiB/s [2024-11-25T10:28:47.455Z] 2261.33 IOPS, 141.33 MiB/s 00:22:53.122 Latency(us) 00:22:53.122 [2024-11-25T10:28:47.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.122 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x0 length 0x8000 00:22:53.122 nvme0n1 : 5.91 140.74 8.80 0.00 0.00 875057.41 30504.03 1121023.07 00:22:53.122 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x8000 length 0x8000 00:22:53.122 nvme0n1 : 5.93 107.95 6.75 0.00 0.00 1138777.37 104380.97 1060015.01 00:22:53.122 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x0 length 0x8000 00:22:53.122 nvme0n2 : 5.57 160.74 10.05 0.00 0.00 741997.98 52190.49 823608.79 00:22:53.122 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x8000 length 0x8000 00:22:53.122 nvme0n2 : 5.93 128.12 8.01 0.00 0.00 938601.26 102951.10 861738.82 00:22:53.122 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x0 length 0x8000 00:22:53.122 nvme0n3 : 5.92 137.94 8.62 0.00 0.00 830866.23 96754.97 1281169.22 00:22:53.122 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x8000 length 0x8000 00:22:53.122 nvme0n3 : 5.81 92.23 5.76 0.00 0.00 1278110.28 101997.85 2821622.69 00:22:53.122 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x0 length 0x2000 00:22:53.122 nvme1n1 : 5.98 139.22 8.70 0.00 0.00 809295.99 40513.16 922746.88 00:22:53.122 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x2000 length 0x2000 00:22:53.122 nvme1n1 : 5.91 127.24 7.95 0.00 0.00 887415.75 131548.63 1792111.71 00:22:53.122 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x0 length 0xa000 00:22:53.122 nvme2n1 : 5.98 144.41 9.03 0.00 0.00 762407.65 20018.27 1410811.35 00:22:53.122 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0xa000 length 0xa000 00:22:53.122 nvme2n1 : 5.91 143.37 8.96 0.00 0.00 776739.23 23473.80 1136275.08 00:22:53.122 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0x0 length 0xbd0b 00:22:53.122 nvme3n1 : 5.98 176.57 11.04 0.00 0.00 608590.99 6464.23 1464193.40 00:22:53.122 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:53.122 Verification LBA range: start 0xbd0b length 0xbd0b 00:22:53.122 nvme3n1 : 5.94 161.59 10.10 0.00 0.00 672755.65 9889.98 869364.83 00:22:53.122 [2024-11-25T10:28:47.455Z] =================================================================================================================== 00:22:53.122 [2024-11-25T10:28:47.455Z] Total : 1660.12 103.76 0.00 0.00 831767.76 6464.23 2821622.69 00:22:54.500 00:22:54.500 real 0m8.582s 00:22:54.500 user 0m15.390s 00:22:54.500 sys 0m0.702s 00:22:54.500 10:28:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.500 10:28:48 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:54.500 ************************************ 00:22:54.500 END TEST bdev_verify_big_io 00:22:54.500 ************************************ 00:22:54.500 10:28:48 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:54.500 10:28:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:54.500 10:28:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.500 10:28:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.500 ************************************ 00:22:54.500 START TEST bdev_write_zeroes 00:22:54.500 ************************************ 00:22:54.500 10:28:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:54.500 [2024-11-25 10:28:48.820648] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:22:54.500 [2024-11-25 10:28:48.820836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75003 ] 00:22:54.764 [2024-11-25 10:28:49.007177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.022 [2024-11-25 10:28:49.220291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.589 Running I/O for 1 seconds... 
00:22:56.524 60145.00 IOPS, 234.94 MiB/s 00:22:56.524 Latency(us) 00:22:56.524 [2024-11-25T10:28:50.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.524 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:56.524 nvme0n1 : 1.03 8587.53 33.55 0.00 0.00 14889.72 8460.10 31695.59 00:22:56.524 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:56.524 nvme0n2 : 1.03 8574.07 33.49 0.00 0.00 14898.62 8638.84 31933.91 00:22:56.524 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:56.524 nvme0n3 : 1.03 8561.04 33.44 0.00 0.00 14907.71 8638.84 32172.22 00:22:56.524 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:56.524 nvme1n1 : 1.03 8548.02 33.39 0.00 0.00 14916.53 8638.84 32172.22 00:22:56.524 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:56.524 nvme2n1 : 1.03 8535.25 33.34 0.00 0.00 14925.79 8638.84 32410.53 00:22:56.524 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:56.524 nvme3n1 : 1.04 16178.09 63.20 0.00 0.00 7841.81 3068.28 20494.89 00:22:56.524 [2024-11-25T10:28:50.857Z] =================================================================================================================== 00:22:56.524 [2024-11-25T10:28:50.857Z] Total : 58984.00 230.41 0.00 0.00 12953.60 3068.28 32410.53 00:22:57.901 00:22:57.901 real 0m3.259s 00:22:57.901 user 0m2.395s 00:22:57.901 sys 0m0.690s 00:22:57.901 10:28:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.901 10:28:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:57.901 ************************************ 00:22:57.901 END TEST bdev_write_zeroes 00:22:57.901 ************************************ 00:22:57.901 10:28:52 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:57.901 10:28:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:57.901 10:28:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.901 10:28:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:57.901 ************************************ 00:22:57.901 START TEST bdev_json_nonenclosed 00:22:57.901 ************************************ 00:22:57.901 10:28:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:57.901 [2024-11-25 10:28:52.134366] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:22:57.901 [2024-11-25 10:28:52.134519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:22:58.160 [2024-11-25 10:28:52.312200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.160 [2024-11-25 10:28:52.465675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.160 [2024-11-25 10:28:52.465825] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:58.160 [2024-11-25 10:28:52.465859] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:58.160 [2024-11-25 10:28:52.465873] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:58.777 00:22:58.777 real 0m0.751s 00:22:58.777 user 0m0.492s 00:22:58.777 sys 0m0.152s 00:22:58.777 10:28:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.777 10:28:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:58.777 ************************************ 00:22:58.777 END TEST bdev_json_nonenclosed 00:22:58.777 ************************************ 00:22:58.777 10:28:52 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:58.777 10:28:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:58.777 10:28:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.777 10:28:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:58.777 ************************************ 00:22:58.777 START TEST bdev_json_nonarray 00:22:58.777 ************************************ 00:22:58.777 10:28:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:58.777 [2024-11-25 10:28:52.966458] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:22:58.777 [2024-11-25 10:28:52.966655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75088 ] 00:22:59.123 [2024-11-25 10:28:53.155616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.123 [2024-11-25 10:28:53.341877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.123 [2024-11-25 10:28:53.342017] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:59.123 [2024-11-25 10:28:53.342052] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:59.123 [2024-11-25 10:28:53.342070] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:59.381 00:22:59.381 real 0m0.813s 00:22:59.381 user 0m0.561s 00:22:59.381 sys 0m0.144s 00:22:59.381 10:28:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.381 10:28:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:59.381 ************************************ 00:22:59.381 END TEST bdev_json_nonarray 00:22:59.381 ************************************ 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:22:59.381 10:28:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:59.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:00.878 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:00.878 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:00.878 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:01.136 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:01.136 00:23:01.136 real 0m59.491s 00:23:01.136 user 1m40.254s 00:23:01.136 sys 0m29.794s 00:23:01.136 10:28:55 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.136 10:28:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:01.136 ************************************ 00:23:01.136 END TEST blockdev_xnvme 00:23:01.136 ************************************ 00:23:01.136 10:28:55 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:01.136 10:28:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:01.136 10:28:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.136 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:23:01.136 ************************************ 00:23:01.136 START TEST ublk 00:23:01.136 ************************************ 00:23:01.136 10:28:55 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:01.136 * Looking for test storage... 
00:23:01.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:01.136 10:28:55 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:01.136 10:28:55 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:01.136 10:28:55 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.394 10:28:55 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.394 10:28:55 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.394 10:28:55 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.394 10:28:55 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.394 10:28:55 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.394 10:28:55 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:01.394 10:28:55 ublk -- scripts/common.sh@345 -- # : 1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.394 10:28:55 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:01.394 10:28:55 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@353 -- # local d=1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.394 10:28:55 ublk -- scripts/common.sh@355 -- # echo 1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.394 10:28:55 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@353 -- # local d=2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.394 10:28:55 ublk -- scripts/common.sh@355 -- # echo 2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.394 10:28:55 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.394 10:28:55 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.394 10:28:55 ublk -- scripts/common.sh@368 -- # return 0 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:01.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.394 --rc genhtml_branch_coverage=1 00:23:01.394 --rc genhtml_function_coverage=1 00:23:01.394 --rc genhtml_legend=1 00:23:01.394 --rc geninfo_all_blocks=1 00:23:01.394 --rc geninfo_unexecuted_blocks=1 00:23:01.394 00:23:01.394 ' 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:01.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.394 --rc genhtml_branch_coverage=1 00:23:01.394 --rc genhtml_function_coverage=1 00:23:01.394 --rc genhtml_legend=1 00:23:01.394 --rc geninfo_all_blocks=1 00:23:01.394 --rc geninfo_unexecuted_blocks=1 00:23:01.394 00:23:01.394 ' 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:01.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.394 --rc genhtml_branch_coverage=1 00:23:01.394 --rc 
genhtml_function_coverage=1 00:23:01.394 --rc genhtml_legend=1 00:23:01.394 --rc geninfo_all_blocks=1 00:23:01.394 --rc geninfo_unexecuted_blocks=1 00:23:01.394 00:23:01.394 ' 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:01.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.394 --rc genhtml_branch_coverage=1 00:23:01.394 --rc genhtml_function_coverage=1 00:23:01.394 --rc genhtml_legend=1 00:23:01.394 --rc geninfo_all_blocks=1 00:23:01.394 --rc geninfo_unexecuted_blocks=1 00:23:01.394 00:23:01.394 ' 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:01.394 10:28:55 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:01.394 10:28:55 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:01.394 10:28:55 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:01.394 10:28:55 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:01.394 10:28:55 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:01.394 10:28:55 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:01.394 10:28:55 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:01.394 10:28:55 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:01.394 10:28:55 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.394 10:28:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:01.394 ************************************ 00:23:01.394 START TEST test_save_ublk_config 00:23:01.394 ************************************ 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75378 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:01.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
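For orientation before the trace continues: the RPC sequence that test_save_ublk_config drives against this target reduces to four calls. The method names and parameters below are taken from the save_config dump that follows in this log; the rpc.py CLI spellings are an assumption, since the log only shows the in-script rpc_cmd wrapper.

    # Sketch of the RPC sequence behind this test (params as seen in the saved config):
    rpc.py bdev_malloc_create ...    # {"name":"malloc0","num_blocks":8192,"block_size":4096}
    rpc.py ublk_create_target ...    # {"cpumask":"1"}
    rpc.py ublk_start_disk ...       # {"bdev_name":"malloc0","ublk_id":0,"num_queues":1,"queue_depth":128}
    rpc.py save_config               # emits the full JSON configuration shown below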
00:23:01.394 10:28:55 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75378 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75378 ']' 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.394 10:28:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:01.394 [2024-11-25 10:28:55.648535] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:23:01.394 [2024-11-25 10:28:55.649433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75378 ] 00:23:01.651 [2024-11-25 10:28:55.838911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.651 [2024-11-25 10:28:55.972742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.585 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.585 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:02.585 10:28:56 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:23:02.585 10:28:56 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:23:02.585 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.585 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:02.585 [2024-11-25 10:28:56.860803] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:02.585 [2024-11-25 10:28:56.861973] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:02.844 malloc0 00:23:02.844 [2024-11-25 10:28:56.948964] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:02.844 [2024-11-25 10:28:56.949074] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:02.844 [2024-11-25 10:28:56.949091] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:02.844 [2024-11-25 10:28:56.949101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:02.844 [2024-11-25 10:28:56.956996] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:02.844 [2024-11-25 10:28:56.957024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:02.844 [2024-11-25 10:28:56.964815] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:02.844 [2024-11-25 10:28:56.964938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:02.844 [2024-11-25 10:28:56.981816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:02.844 0 00:23:02.844 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.844 10:28:56 ublk.test_save_ublk_config -- 
ublk/ublk.sh@115 -- # rpc_cmd save_config 00:23:02.844 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.844 10:28:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:03.102 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.102 10:28:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:23:03.102 "subsystems": [ 00:23:03.102 { 00:23:03.102 "subsystem": "fsdev", 00:23:03.102 "config": [ 00:23:03.102 { 00:23:03.102 "method": "fsdev_set_opts", 00:23:03.102 "params": { 00:23:03.103 "fsdev_io_pool_size": 65535, 00:23:03.103 "fsdev_io_cache_size": 256 00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "keyring", 00:23:03.103 "config": [] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "iobuf", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "iobuf_set_options", 00:23:03.103 "params": { 00:23:03.103 "small_pool_count": 8192, 00:23:03.103 "large_pool_count": 1024, 00:23:03.103 "small_bufsize": 8192, 00:23:03.103 "large_bufsize": 135168, 00:23:03.103 "enable_numa": false 00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "sock", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "sock_set_default_impl", 00:23:03.103 "params": { 00:23:03.103 "impl_name": "posix" 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "sock_impl_set_options", 00:23:03.103 "params": { 00:23:03.103 "impl_name": "ssl", 00:23:03.103 "recv_buf_size": 4096, 00:23:03.103 "send_buf_size": 4096, 00:23:03.103 "enable_recv_pipe": true, 00:23:03.103 "enable_quickack": false, 00:23:03.103 "enable_placement_id": 0, 00:23:03.103 "enable_zerocopy_send_server": true, 00:23:03.103 "enable_zerocopy_send_client": false, 00:23:03.103 "zerocopy_threshold": 0, 00:23:03.103 "tls_version": 0, 00:23:03.103 "enable_ktls": false 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "sock_impl_set_options", 00:23:03.103 "params": { 00:23:03.103 "impl_name": "posix", 00:23:03.103 "recv_buf_size": 2097152, 00:23:03.103 "send_buf_size": 2097152, 00:23:03.103 "enable_recv_pipe": true, 00:23:03.103 "enable_quickack": false, 00:23:03.103 "enable_placement_id": 0, 00:23:03.103 "enable_zerocopy_send_server": true, 00:23:03.103 "enable_zerocopy_send_client": false, 00:23:03.103 "zerocopy_threshold": 0, 00:23:03.103 "tls_version": 0, 00:23:03.103 "enable_ktls": false 00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "vmd", 00:23:03.103 "config": [] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "accel", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "accel_set_options", 00:23:03.103 "params": { 00:23:03.103 "small_cache_size": 128, 00:23:03.103 "large_cache_size": 16, 00:23:03.103 "task_count": 2048, 00:23:03.103 "sequence_count": 2048, 00:23:03.103 "buf_count": 2048 00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "bdev", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "bdev_set_options", 00:23:03.103 "params": { 00:23:03.103 "bdev_io_pool_size": 65535, 00:23:03.103 "bdev_io_cache_size": 256, 00:23:03.103 "bdev_auto_examine": true, 00:23:03.103 "iobuf_small_cache_size": 128, 00:23:03.103 "iobuf_large_cache_size": 16 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "bdev_raid_set_options", 
00:23:03.103 "params": { 00:23:03.103 "process_window_size_kb": 1024, 00:23:03.103 "process_max_bandwidth_mb_sec": 0 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "bdev_iscsi_set_options", 00:23:03.103 "params": { 00:23:03.103 "timeout_sec": 30 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "bdev_nvme_set_options", 00:23:03.103 "params": { 00:23:03.103 "action_on_timeout": "none", 00:23:03.103 "timeout_us": 0, 00:23:03.103 "timeout_admin_us": 0, 00:23:03.103 "keep_alive_timeout_ms": 10000, 00:23:03.103 "arbitration_burst": 0, 00:23:03.103 "low_priority_weight": 0, 00:23:03.103 "medium_priority_weight": 0, 00:23:03.103 "high_priority_weight": 0, 00:23:03.103 "nvme_adminq_poll_period_us": 10000, 00:23:03.103 "nvme_ioq_poll_period_us": 0, 00:23:03.103 "io_queue_requests": 0, 00:23:03.103 "delay_cmd_submit": true, 00:23:03.103 "transport_retry_count": 4, 00:23:03.103 "bdev_retry_count": 3, 00:23:03.103 "transport_ack_timeout": 0, 00:23:03.103 "ctrlr_loss_timeout_sec": 0, 00:23:03.103 "reconnect_delay_sec": 0, 00:23:03.103 "fast_io_fail_timeout_sec": 0, 00:23:03.103 "disable_auto_failback": false, 00:23:03.103 "generate_uuids": false, 00:23:03.103 "transport_tos": 0, 00:23:03.103 "nvme_error_stat": false, 00:23:03.103 "rdma_srq_size": 0, 00:23:03.103 "io_path_stat": false, 00:23:03.103 "allow_accel_sequence": false, 00:23:03.103 "rdma_max_cq_size": 0, 00:23:03.103 "rdma_cm_event_timeout_ms": 0, 00:23:03.103 "dhchap_digests": [ 00:23:03.103 "sha256", 00:23:03.103 "sha384", 00:23:03.103 "sha512" 00:23:03.103 ], 00:23:03.103 "dhchap_dhgroups": [ 00:23:03.103 "null", 00:23:03.103 "ffdhe2048", 00:23:03.103 "ffdhe3072", 00:23:03.103 "ffdhe4096", 00:23:03.103 "ffdhe6144", 00:23:03.103 "ffdhe8192" 00:23:03.103 ] 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "bdev_nvme_set_hotplug", 00:23:03.103 "params": { 00:23:03.103 "period_us": 100000, 00:23:03.103 "enable": false 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "bdev_malloc_create", 00:23:03.103 "params": { 00:23:03.103 "name": "malloc0", 00:23:03.103 "num_blocks": 8192, 00:23:03.103 "block_size": 4096, 00:23:03.103 "physical_block_size": 4096, 00:23:03.103 "uuid": "b35df6fb-0cd7-4f51-b460-6f9c11428690", 00:23:03.103 "optimal_io_boundary": 0, 00:23:03.103 "md_size": 0, 00:23:03.103 "dif_type": 0, 00:23:03.103 "dif_is_head_of_md": false, 00:23:03.103 "dif_pi_format": 0 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "bdev_wait_for_examine" 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "scsi", 00:23:03.103 "config": null 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "scheduler", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "framework_set_scheduler", 00:23:03.103 "params": { 00:23:03.103 "name": "static" 00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "vhost_scsi", 00:23:03.103 "config": [] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "vhost_blk", 00:23:03.103 "config": [] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "ublk", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "ublk_create_target", 00:23:03.103 "params": { 00:23:03.103 "cpumask": "1" 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "ublk_start_disk", 00:23:03.103 "params": { 00:23:03.103 "bdev_name": "malloc0", 00:23:03.103 "ublk_id": 0, 00:23:03.103 "num_queues": 1, 00:23:03.103 "queue_depth": 128 
00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "nbd", 00:23:03.103 "config": [] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "nvmf", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "nvmf_set_config", 00:23:03.103 "params": { 00:23:03.103 "discovery_filter": "match_any", 00:23:03.103 "admin_cmd_passthru": { 00:23:03.103 "identify_ctrlr": false 00:23:03.103 }, 00:23:03.103 "dhchap_digests": [ 00:23:03.103 "sha256", 00:23:03.103 "sha384", 00:23:03.103 "sha512" 00:23:03.103 ], 00:23:03.103 "dhchap_dhgroups": [ 00:23:03.103 "null", 00:23:03.103 "ffdhe2048", 00:23:03.103 "ffdhe3072", 00:23:03.103 "ffdhe4096", 00:23:03.103 "ffdhe6144", 00:23:03.103 "ffdhe8192" 00:23:03.103 ] 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "nvmf_set_max_subsystems", 00:23:03.103 "params": { 00:23:03.103 "max_subsystems": 1024 00:23:03.103 } 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "method": "nvmf_set_crdt", 00:23:03.103 "params": { 00:23:03.103 "crdt1": 0, 00:23:03.103 "crdt2": 0, 00:23:03.103 "crdt3": 0 00:23:03.103 } 00:23:03.103 } 00:23:03.103 ] 00:23:03.103 }, 00:23:03.103 { 00:23:03.103 "subsystem": "iscsi", 00:23:03.103 "config": [ 00:23:03.103 { 00:23:03.103 "method": "iscsi_set_options", 00:23:03.103 "params": { 00:23:03.103 "node_base": "iqn.2016-06.io.spdk", 00:23:03.103 "max_sessions": 128, 00:23:03.103 "max_connections_per_session": 2, 00:23:03.103 "max_queue_depth": 64, 00:23:03.103 "default_time2wait": 2, 00:23:03.103 "default_time2retain": 20, 00:23:03.103 "first_burst_length": 8192, 00:23:03.103 "immediate_data": true, 00:23:03.103 "allow_duplicated_isid": false, 00:23:03.103 "error_recovery_level": 0, 00:23:03.103 "nop_timeout": 60, 00:23:03.103 "nop_in_interval": 30, 00:23:03.103 "disable_chap": false, 00:23:03.103 "require_chap": false, 00:23:03.104 "mutual_chap": false, 00:23:03.104 "chap_group": 0, 00:23:03.104 "max_large_datain_per_connection": 64, 00:23:03.104 "max_r2t_per_connection": 4, 00:23:03.104 "pdu_pool_size": 36864, 00:23:03.104 "immediate_data_pool_size": 16384, 00:23:03.104 "data_out_pool_size": 2048 00:23:03.104 } 00:23:03.104 } 00:23:03.104 ] 00:23:03.104 } 00:23:03.104 ] 00:23:03.104 }' 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75378 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75378 ']' 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75378 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75378 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.104 killing process with pid 75378 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75378' 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75378 00:23:03.104 10:28:57 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75378 00:23:04.482 [2024-11-25 10:28:58.788063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_STOP_DEV 00:23:04.750 [2024-11-25 10:28:58.816835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:04.750 [2024-11-25 10:28:58.817077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:04.750 [2024-11-25 10:28:58.825839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:04.750 [2024-11-25 10:28:58.825934] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:04.750 [2024-11-25 10:28:58.825971] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:04.750 [2024-11-25 10:28:58.826018] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:04.750 [2024-11-25 10:28:58.826239] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:06.650 10:29:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:23:06.650 10:29:00 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75448 00:23:06.650 10:29:00 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75448 00:23:06.650 10:29:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75448 ']' 00:23:06.650 10:29:00 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.650 10:29:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:23:06.650 "subsystems": [ 00:23:06.650 { 00:23:06.650 "subsystem": "fsdev", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "fsdev_set_opts", 00:23:06.650 "params": { 00:23:06.650 "fsdev_io_pool_size": 65535, 00:23:06.650 "fsdev_io_cache_size": 256 00:23:06.650 } 00:23:06.650 } 00:23:06.650 ] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "keyring", 00:23:06.650 "config": [] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "iobuf", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "iobuf_set_options", 00:23:06.650 "params": { 00:23:06.650 "small_pool_count": 8192, 00:23:06.650 "large_pool_count": 1024, 00:23:06.650 "small_bufsize": 8192, 00:23:06.650 "large_bufsize": 135168, 00:23:06.650 "enable_numa": false 00:23:06.650 } 00:23:06.650 } 00:23:06.650 ] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "sock", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "sock_set_default_impl", 00:23:06.650 "params": { 00:23:06.650 "impl_name": "posix" 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "sock_impl_set_options", 00:23:06.650 "params": { 00:23:06.650 "impl_name": "ssl", 00:23:06.650 "recv_buf_size": 4096, 00:23:06.650 "send_buf_size": 4096, 00:23:06.650 "enable_recv_pipe": true, 00:23:06.650 "enable_quickack": false, 00:23:06.650 "enable_placement_id": 0, 00:23:06.650 "enable_zerocopy_send_server": true, 00:23:06.650 "enable_zerocopy_send_client": false, 00:23:06.650 "zerocopy_threshold": 0, 00:23:06.650 "tls_version": 0, 00:23:06.650 "enable_ktls": false 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "sock_impl_set_options", 00:23:06.650 "params": { 00:23:06.650 "impl_name": "posix", 00:23:06.650 "recv_buf_size": 2097152, 00:23:06.650 "send_buf_size": 2097152, 00:23:06.650 "enable_recv_pipe": true, 00:23:06.650 "enable_quickack": false, 00:23:06.650 "enable_placement_id": 0, 00:23:06.650 "enable_zerocopy_send_server": true, 00:23:06.650 "enable_zerocopy_send_client": false, 00:23:06.650 "zerocopy_threshold": 0, 00:23:06.650 "tls_version": 0, 00:23:06.650 "enable_ktls": false 00:23:06.650 } 
00:23:06.650 } 00:23:06.650 ] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "vmd", 00:23:06.650 "config": [] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "accel", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "accel_set_options", 00:23:06.650 "params": { 00:23:06.650 "small_cache_size": 128, 00:23:06.650 "large_cache_size": 16, 00:23:06.650 "task_count": 2048, 00:23:06.650 "sequence_count": 2048, 00:23:06.650 "buf_count": 2048 00:23:06.650 } 00:23:06.650 } 00:23:06.650 ] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "bdev", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "bdev_set_options", 00:23:06.650 "params": { 00:23:06.650 "bdev_io_pool_size": 65535, 00:23:06.650 "bdev_io_cache_size": 256, 00:23:06.650 "bdev_auto_examine": true, 00:23:06.650 "iobuf_small_cache_size": 128, 00:23:06.650 "iobuf_large_cache_size": 16 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "bdev_raid_set_options", 00:23:06.650 "params": { 00:23:06.650 "process_window_size_kb": 1024, 00:23:06.650 "process_max_bandwidth_mb_sec": 0 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "bdev_iscsi_set_options", 00:23:06.650 "params": { 00:23:06.650 "timeout_sec": 30 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "bdev_nvme_set_options", 00:23:06.650 "params": { 00:23:06.650 "action_on_timeout": "none", 00:23:06.650 "timeout_us": 0, 00:23:06.650 "timeout_admin_us": 0, 00:23:06.650 "keep_alive_timeout_ms": 10000, 00:23:06.650 "arbitration_burst": 0, 00:23:06.650 "low_priority_weight": 0, 00:23:06.650 "medium_priority_weight": 0, 00:23:06.650 "high_priority_weight": 0, 00:23:06.650 "nvme_adminq_poll_period_us": 10000, 00:23:06.650 "nvme_ioq_poll_period_us": 0, 00:23:06.650 "io_queue_requests": 0, 00:23:06.650 "delay_cmd_submit": true, 00:23:06.650 "transport_retry_count": 4, 00:23:06.650 "bdev_retry_count": 3, 00:23:06.650 "transport_ack_timeout": 0, 00:23:06.650 "ctrlr_loss_timeout_sec": 0, 00:23:06.650 "reconnect_delay_sec": 0, 00:23:06.650 "fast_io_fail_timeout_sec": 0, 00:23:06.650 "disable_auto_failback": false, 00:23:06.650 "generate_uuids": false, 00:23:06.650 "transport_tos": 0, 00:23:06.650 "nvme_error_stat": false, 00:23:06.650 "rdma_srq_size": 0, 00:23:06.650 "io_path_stat": false, 00:23:06.650 "allow_accel_sequence": false, 00:23:06.650 "rdma_max_cq_size": 0, 00:23:06.650 "rdma_cm_event_timeout_ms": 0, 00:23:06.650 "dhchap_digests": [ 00:23:06.650 "sha256", 00:23:06.650 "sha384", 00:23:06.650 "sha512" 00:23:06.650 ], 00:23:06.650 "dhchap_dhgroups": [ 00:23:06.650 "null", 00:23:06.650 "ffdhe2048", 00:23:06.650 "ffdhe3072", 00:23:06.650 "ffdhe4096", 00:23:06.650 "ffdhe6144", 00:23:06.650 "ffdhe8192" 00:23:06.650 ] 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "bdev_nvme_set_hotplug", 00:23:06.650 "params": { 00:23:06.650 "period_us": 100000, 00:23:06.650 "enable": false 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "bdev_malloc_create", 00:23:06.650 "params": { 00:23:06.650 "name": "malloc0", 00:23:06.650 "num_blocks": 8192, 00:23:06.650 "block_size": 4096, 00:23:06.650 "physical_block_size": 4096, 00:23:06.650 "uuid": "b35df6fb-0cd7-4f51-b460-6f9c11428690", 00:23:06.650 "optimal_io_boundary": 0, 00:23:06.650 "md_size": 0, 00:23:06.650 "dif_type": 0, 00:23:06.650 "dif_is_head_of_md": false, 00:23:06.650 "dif_pi_format": 0 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "bdev_wait_for_examine" 00:23:06.650 } 00:23:06.650 ] 
00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "scsi", 00:23:06.650 "config": null 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "scheduler", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "framework_set_scheduler", 00:23:06.650 "params": { 00:23:06.650 "name": "static" 00:23:06.650 } 00:23:06.650 } 00:23:06.650 ] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "vhost_scsi", 00:23:06.650 "config": [] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "vhost_blk", 00:23:06.650 "config": [] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "ublk", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "ublk_create_target", 00:23:06.650 "params": { 00:23:06.650 "cpumask": "1" 00:23:06.650 } 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "method": "ublk_start_disk", 00:23:06.650 "params": { 00:23:06.650 "bdev_name": "malloc0", 00:23:06.650 "ublk_id": 0, 00:23:06.650 "num_queues": 1, 00:23:06.650 "queue_depth": 128 00:23:06.650 } 00:23:06.650 } 00:23:06.650 ] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "nbd", 00:23:06.650 "config": [] 00:23:06.650 }, 00:23:06.650 { 00:23:06.650 "subsystem": "nvmf", 00:23:06.650 "config": [ 00:23:06.650 { 00:23:06.650 "method": "nvmf_set_config", 00:23:06.650 "params": { 00:23:06.650 "discovery_filter": "match_any", 00:23:06.651 "admin_cmd_passthru": { 00:23:06.651 "identify_ctrlr": false 00:23:06.651 }, 00:23:06.651 "dhchap_digests": [ 00:23:06.651 "sha256", 00:23:06.651 "sha384", 00:23:06.651 "sha512" 00:23:06.651 ], 00:23:06.651 "dhchap_dhgroups": [ 00:23:06.651 "null", 00:23:06.651 "ffdhe2048", 00:23:06.651 "ffdhe3072", 00:23:06.651 "ffdhe4096", 00:23:06.651 "ffdhe6144", 00:23:06.651 "ffdhe8192" 00:23:06.651 ] 00:23:06.651 } 00:23:06.651 }, 00:23:06.651 { 00:23:06.651 "method": "nvmf_set_max_subsystems", 00:23:06.651 "params": { 00:23:06.651 "max_subsystems": 1024 00:23:06.651 } 00:23:06.651 }, 00:23:06.651 { 00:23:06.651 "method": "nvmf_set_crdt", 00:23:06.651 "params": { 00:23:06.651 "crdt1": 0, 00:23:06.651 "crdt2": 0, 00:23:06.651 "crdt3": 0 00:23:06.651 } 00:23:06.651 } 00:23:06.651 ] 00:23:06.651 }, 00:23:06.651 { 00:23:06.651 "subsystem": "iscsi", 00:23:06.651 "config": [ 00:23:06.651 { 00:23:06.651 "method": "iscsi_set_options", 00:23:06.651 "params": { 00:23:06.651 "node_base": "iqn.2016-06.io.spdk", 00:23:06.651 "max_sessions": 128, 00:23:06.651 "max_connections_per_session": 2, 00:23:06.651 "max_queue_depth": 64, 00:23:06.651 "default_time2wait": 2, 00:23:06.651 "default_time2retain": 20, 00:23:06.651 "first_burst_length": 8192, 00:23:06.651 "immediate_data": true, 00:23:06.651 "allow_duplicated_isid": false, 00:23:06.651 "error_recovery_level": 0, 00:23:06.651 "nop_timeout": 60, 00:23:06.651 "nop_in_interval": 30, 00:23:06.651 "disable_chap": false, 00:23:06.651 "require_chap": false, 00:23:06.651 "mutual_chap": false, 00:23:06.651 "chap_group": 0, 00:23:06.651 "max_large_datain_per_connection": 64, 00:23:06.651 "max_r2t_per_connection": 4, 00:23:06.651 "pdu_pool_size": 36864, 00:23:06.651 "immediate_data_pool_size": 16384, 00:23:06.651 "data_out_pool_size": 2048 00:23:06.651 } 00:23:06.651 } 00:23:06.651 ] 00:23:06.651 } 00:23:06.651 ] 00:23:06.651 }' 00:23:06.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
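Note on the `-c /dev/fd/63` in the spdk_tgt command line above: that path is what bash process substitution expands to, and the xtrace shows the script echoing the previously saved JSON, so the harness is evidently feeding the first target's config straight into the second. A minimal sketch of that pattern (variable names are assumptions, not ublk.sh's literal code):

    config=$(rpc.py save_config)              # JSON captured from the first target
    spdk_tgt -L ublk -c <(echo "$config")     # <(...) appears to the child as /dev/fd/63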
00:23:06.651 10:29:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.651 10:29:00 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.651 10:29:00 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.651 10:29:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:06.651 [2024-11-25 10:29:00.707387] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:23:06.651 [2024-11-25 10:29:00.707551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75448 ] 00:23:06.651 [2024-11-25 10:29:00.883236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.909 [2024-11-25 10:29:01.023312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.843 [2024-11-25 10:29:02.061803] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:07.843 [2024-11-25 10:29:02.063046] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:07.843 [2024-11-25 10:29:02.069984] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:07.843 [2024-11-25 10:29:02.070109] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:07.843 [2024-11-25 10:29:02.070129] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:07.843 [2024-11-25 10:29:02.070139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:07.843 [2024-11-25 10:29:02.078908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:07.843 [2024-11-25 10:29:02.078951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:07.843 [2024-11-25 10:29:02.085826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:07.843 [2024-11-25 10:29:02.085995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:07.843 [2024-11-25 10:29:02.102810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:07.843 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75448 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 
75448 ']' 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75448 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75448 00:23:08.101 killing process with pid 75448 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75448' 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75448 00:23:08.101 10:29:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75448 00:23:09.473 [2024-11-25 10:29:03.689249] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:09.474 [2024-11-25 10:29:03.728861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:09.474 [2024-11-25 10:29:03.729062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:09.474 [2024-11-25 10:29:03.736868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:09.474 [2024-11-25 10:29:03.736982] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:09.474 [2024-11-25 10:29:03.737007] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:09.474 [2024-11-25 10:29:03.737061] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:09.474 [2024-11-25 10:29:03.737331] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:11.376 10:29:05 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:23:11.376 00:23:11.376 real 0m9.982s 00:23:11.376 user 0m7.753s 00:23:11.376 sys 0m3.308s 00:23:11.376 ************************************ 00:23:11.376 END TEST test_save_ublk_config 00:23:11.376 ************************************ 00:23:11.376 10:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.376 10:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:11.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.376 10:29:05 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75531 00:23:11.376 10:29:05 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.376 10:29:05 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75531 00:23:11.376 10:29:05 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:11.376 10:29:05 ublk -- common/autotest_common.sh@835 -- # '[' -z 75531 ']' 00:23:11.376 10:29:05 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.376 10:29:05 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.376 10:29:05 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:11.376 10:29:05 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.376 10:29:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:11.376 [2024-11-25 10:29:05.668975] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:23:11.376 [2024-11-25 10:29:05.669332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75531 ] 00:23:11.651 [2024-11-25 10:29:05.852710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:11.909 [2024-11-25 10:29:06.026015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.909 [2024-11-25 10:29:06.026045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.845 10:29:06 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.845 10:29:06 ublk -- common/autotest_common.sh@868 -- # return 0 00:23:12.845 10:29:06 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:23:12.845 10:29:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:12.845 10:29:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.845 10:29:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:12.845 ************************************ 00:23:12.845 START TEST test_create_ublk 00:23:12.845 ************************************ 00:23:12.845 10:29:06 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:23:12.845 10:29:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:23:12.845 10:29:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.845 10:29:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:12.845 [2024-11-25 10:29:06.937801] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:12.845 [2024-11-25 10:29:06.940723] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:12.845 10:29:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.845 10:29:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:23:12.845 10:29:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:23:12.845 10:29:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.845 10:29:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:13.104 [2024-11-25 10:29:07.225066] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:23:13.104 [2024-11-25 10:29:07.225755] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:23:13.104 [2024-11-25 10:29:07.225819] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:13.104 [2024-11-25 10:29:07.225842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_ADD_DEV 00:23:13.104 [2024-11-25 10:29:07.234282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:13.104 [2024-11-25 10:29:07.234314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:13.104 [2024-11-25 10:29:07.240820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:13.104 [2024-11-25 10:29:07.251876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:13.104 [2024-11-25 10:29:07.265914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:13.104 10:29:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:23:13.104 { 00:23:13.104 "ublk_device": "/dev/ublkb0", 00:23:13.104 "id": 0, 00:23:13.104 "queue_depth": 512, 00:23:13.104 "num_queues": 4, 00:23:13.104 "bdev_name": "Malloc0" 00:23:13.104 } 00:23:13.104 ]' 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:23:13.104 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:23:13.362 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:23:13.362 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:23:13.362 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:23:13.362 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:23:13.362 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:23:13.362 10:29:07 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:23:13.362 10:29:07 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:23:13.362 10:29:07 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
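Stripped of xtrace noise, the bring-up traced above is four RPCs; the fio command being assembled around this point then drives the resulting /dev/ublkb0. A condensed recreation with rpc.py (command names and arguments are verbatim from the trace; the socket is the default /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target                      # one-time ublk target setup
    "$rpc" bdev_malloc_create 128 4096             # 128 MiB malloc bdev, 4 KiB blocks -> Malloc0
    "$rpc" ublk_start_disk Malloc0 0 -q 4 -d 512   # expose as /dev/ublkb0: 4 queues, depth 512
    "$rpc" ublk_get_disks -n 0                     # emits the JSON the jq assertions above check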
00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:23:13.363 10:29:07 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:23:13.363 fio: verification read phase will never start because write phase uses all of runtime 00:23:13.363 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:23:13.363 fio-3.35 00:23:13.363 Starting 1 process 00:23:25.567 00:23:25.567 fio_test: (groupid=0, jobs=1): err= 0: pid=75585: Mon Nov 25 10:29:17 2024 00:23:25.567 write: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(400MiB/10001msec); 0 zone resets 00:23:25.567 clat (usec): min=57, max=9378, avg=96.27, stdev=168.52 00:23:25.567 lat (usec): min=57, max=9379, avg=97.06, stdev=168.54 00:23:25.567 clat percentiles (usec): 00:23:25.567 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 77], 20.00th=[ 79], 00:23:25.567 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:23:25.567 | 70.00th=[ 89], 80.00th=[ 93], 90.00th=[ 103], 95.00th=[ 114], 00:23:25.567 | 99.00th=[ 137], 99.50th=[ 165], 99.90th=[ 3326], 99.95th=[ 3589], 00:23:25.567 | 99.99th=[ 4047] 00:23:25.567 bw ( KiB/s): min=17432, max=44168, per=99.90%, avg=40906.53, stdev=5851.05, samples=19 00:23:25.567 iops : min= 4358, max=11042, avg=10226.63, stdev=1462.76, samples=19 00:23:25.567 lat (usec) : 100=87.97%, 250=11.60%, 500=0.02%, 750=0.01%, 1000=0.02% 00:23:25.567 lat (msec) : 2=0.12%, 4=0.25%, 10=0.01% 00:23:25.568 cpu : usr=2.78%, sys=7.32%, ctx=102385, majf=0, minf=796 00:23:25.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:25.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:25.568 issued rwts: total=0,102378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:25.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:25.568 00:23:25.568 Run status group 0 (all jobs): 00:23:25.568 WRITE: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=400MiB (419MB), run=10001-10001msec 00:23:25.568 00:23:25.568 Disk stats (read/write): 00:23:25.568 ublkb0: ios=0/101239, merge=0/0, ticks=0/8932, in_queue=8933, util=99.01% 00:23:25.568 10:29:17 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 [2024-11-25 10:29:17.795247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:25.568 [2024-11-25 10:29:17.827422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:25.568 [2024-11-25 10:29:17.828438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:25.568 [2024-11-25 10:29:17.834813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:25.568 [2024-11-25 10:29:17.835135] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:25.568 [2024-11-25 
10:29:17.835160] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:17 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 [2024-11-25 10:29:17.849943] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:23:25.568 request: 00:23:25.568 { 00:23:25.568 "ublk_id": 0, 00:23:25.568 "method": "ublk_stop_disk", 00:23:25.568 "req_id": 1 00:23:25.568 } 00:23:25.568 Got JSON-RPC error response 00:23:25.568 response: 00:23:25.568 { 00:23:25.568 "code": -19, 00:23:25.568 "message": "No such device" 00:23:25.568 } 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.568 10:29:17 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 [2024-11-25 10:29:17.865899] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:25.568 [2024-11-25 10:29:17.872877] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:25.568 [2024-11-25 10:29:17.872932] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:17 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:18 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 10:29:18 ublk.test_create_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:23:25.568 ************************************ 00:23:25.568 END TEST test_create_ublk 00:23:25.568 ************************************ 00:23:25.568 10:29:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:23:25.568 00:23:25.568 real 0m11.777s 00:23:25.568 user 0m0.725s 00:23:25.568 sys 0m0.841s 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.568 10:29:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 10:29:18 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:23:25.568 10:29:18 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:25.568 10:29:18 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.568 10:29:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 ************************************ 00:23:25.568 START TEST test_create_multi_ublk 00:23:25.568 ************************************ 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 [2024-11-25 10:29:18.770794] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:25.568 [2024-11-25 10:29:18.773549] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 [2024-11-25 10:29:19.069969] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:23:25.568 [2024-11-25 10:29:19.070520] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:23:25.568 [2024-11-25 10:29:19.070544] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:25.568 [2024-11-25 10:29:19.070561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:25.568 [2024-11-25 10:29:19.085810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:25.568 [2024-11-25 10:29:19.085847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:25.568 [2024-11-25 10:29:19.093810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:25.568 [2024-11-25 10:29:19.094563] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:25.568 [2024-11-25 10:29:19.105914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.568 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.568 [2024-11-25 10:29:19.401963] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:23:25.568 [2024-11-25 10:29:19.402508] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:23:25.568 [2024-11-25 10:29:19.402536] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:25.568 [2024-11-25 10:29:19.402547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:23:25.568 [2024-11-25 10:29:19.411198] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:25.568 [2024-11-25 10:29:19.411227] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:25.569 [2024-11-25 10:29:19.417817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:25.569 [2024-11-25 10:29:19.418641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:23:25.569 [2024-11-25 10:29:19.434822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.569 [2024-11-25 10:29:19.727973] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:23:25.569 [2024-11-25 10:29:19.728498] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:23:25.569 [2024-11-25 10:29:19.728521] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:23:25.569 [2024-11-25 10:29:19.728535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:23:25.569 [2024-11-25 10:29:19.737144] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:25.569 [2024-11-25 10:29:19.737181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:25.569 [2024-11-25 10:29:19.743807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:25.569 [2024-11-25 10:29:19.744575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:23:25.569 [2024-11-25 10:29:19.752835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.569 10:29:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.856 [2024-11-25 10:29:20.035983] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:23:25.856 [2024-11-25 10:29:20.036500] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via 
ublk 3 00:23:25.856 [2024-11-25 10:29:20.036527] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:23:25.856 [2024-11-25 10:29:20.036538] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:23:25.856 [2024-11-25 10:29:20.042804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:25.856 [2024-11-25 10:29:20.042839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:25.856 [2024-11-25 10:29:20.050806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:25.856 [2024-11-25 10:29:20.051574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:23:25.856 [2024-11-25 10:29:20.054695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:23:25.856 { 00:23:25.856 "ublk_device": "/dev/ublkb0", 00:23:25.856 "id": 0, 00:23:25.856 "queue_depth": 512, 00:23:25.856 "num_queues": 4, 00:23:25.856 "bdev_name": "Malloc0" 00:23:25.856 }, 00:23:25.856 { 00:23:25.856 "ublk_device": "/dev/ublkb1", 00:23:25.856 "id": 1, 00:23:25.856 "queue_depth": 512, 00:23:25.856 "num_queues": 4, 00:23:25.856 "bdev_name": "Malloc1" 00:23:25.856 }, 00:23:25.856 { 00:23:25.856 "ublk_device": "/dev/ublkb2", 00:23:25.856 "id": 2, 00:23:25.856 "queue_depth": 512, 00:23:25.856 "num_queues": 4, 00:23:25.856 "bdev_name": "Malloc2" 00:23:25.856 }, 00:23:25.856 { 00:23:25.856 "ublk_device": "/dev/ublkb3", 00:23:25.856 "id": 3, 00:23:25.856 "queue_depth": 512, 00:23:25.856 "num_queues": 4, 00:23:25.856 "bdev_name": "Malloc3" 00:23:25.856 } 00:23:25.856 ]' 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:23:25.856 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:23:26.113 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:23:26.371 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:23:26.629 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:23:26.887 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:26.887 10:29:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:23:26.887 10:29:21 
ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:26.887 [2024-11-25 10:29:21.097037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:26.887 [2024-11-25 10:29:21.130318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:26.887 [2024-11-25 10:29:21.131651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:26.887 [2024-11-25 10:29:21.140866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:26.887 [2024-11-25 10:29:21.141195] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:26.887 [2024-11-25 10:29:21.141221] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:26.887 [2024-11-25 10:29:21.155896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:26.887 [2024-11-25 10:29:21.185407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:26.887 [2024-11-25 10:29:21.186595] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:26.887 [2024-11-25 10:29:21.195823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:26.887 [2024-11-25 10:29:21.196168] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:26.887 [2024-11-25 10:29:21.196196] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.887 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:26.887 [2024-11-25 10:29:21.211977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:23:27.145 [2024-11-25 10:29:21.238419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:27.145 [2024-11-25 10:29:21.239533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:23:27.145 [2024-11-25 10:29:21.243834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:27.145 [2024-11-25 10:29:21.244167] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:23:27.145 [2024-11-25 10:29:21.244194] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:23:27.145 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.145 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:27.145 10:29:21 
ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:23:27.145 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.145 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:27.145 [2024-11-25 10:29:21.259966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:23:27.145 [2024-11-25 10:29:21.306853] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:27.145 [2024-11-25 10:29:21.307765] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:23:27.145 [2024-11-25 10:29:21.315832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:27.145 [2024-11-25 10:29:21.316261] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:23:27.145 [2024-11-25 10:29:21.316300] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:23:27.145 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.145 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:23:27.462 [2024-11-25 10:29:21.571924] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:27.462 [2024-11-25 10:29:21.579791] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:27.462 [2024-11-25 10:29:21.579861] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:27.462 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:23:27.462 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:27.462 10:29:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:27.462 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.462 10:29:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:28.027 10:29:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.027 10:29:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:28.027 10:29:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:28.027 10:29:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.027 10:29:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:28.957 10:29:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.957 10:29:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:28.957 10:29:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:23:28.957 10:29:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.957 10:29:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:29.214 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.214 10:29:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:29.214 10:29:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:23:29.214 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.214 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
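The four per-device traces above repeat one pattern; written out as the loop that ublk.sh effectively runs, it looks like the sketch below (RPC names and arguments are verbatim from the trace; the loop form and variable are ours):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Bring-up: one 128 MiB malloc bdev and one ublk device per index 0..3.
    for i in $(seq 0 3); do
        "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096
        "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
    done

    # Teardown mirrors bring-up: stop every disk, drop the target, delete the bdevs.
    for i in $(seq 0 3); do
        "$rpc" ublk_stop_disk "$i"
    done
    "$rpc" -t 120 ublk_destroy_target
    for i in $(seq 0 3); do
        "$rpc" bdev_malloc_delete "Malloc$i"
    done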
00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:23:29.472 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:23:29.730 ************************************ 00:23:29.730 END TEST test_create_multi_ublk 00:23:29.730 ************************************ 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:23:29.730 00:23:29.730 real 0m5.115s 00:23:29.730 user 0m1.267s 00:23:29.730 sys 0m0.189s 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.730 10:29:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:29.730 10:29:23 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:29.730 10:29:23 ublk -- ublk/ublk.sh@147 -- # cleanup 00:23:29.730 10:29:23 ublk -- ublk/ublk.sh@130 -- # killprocess 75531 00:23:29.730 10:29:23 ublk -- common/autotest_common.sh@954 -- # '[' -z 75531 ']' 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@958 -- # kill -0 75531 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@959 -- # uname 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75531 00:23:29.731 killing process with pid 75531 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75531' 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@973 -- # kill 75531 00:23:29.731 10:29:23 ublk -- common/autotest_common.sh@978 -- # wait 75531 00:23:31.118 [2024-11-25 10:29:25.012325] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:31.118 [2024-11-25 10:29:25.012388] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:32.052 ************************************ 00:23:32.052 END TEST ublk 00:23:32.052 ************************************ 00:23:32.052 00:23:32.052 real 0m30.957s 00:23:32.052 user 0m44.716s 00:23:32.052 sys 0m10.668s 00:23:32.052 10:29:26 ublk -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:23:32.052 10:29:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:32.052 10:29:26 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:23:32.052 10:29:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.052 10:29:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.052 10:29:26 -- common/autotest_common.sh@10 -- # set +x 00:23:32.052 ************************************ 00:23:32.052 START TEST ublk_recovery 00:23:32.052 ************************************ 00:23:32.052 10:29:26 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:23:32.311 * Looking for test storage... 00:23:32.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.311 10:29:26 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.311 --rc genhtml_branch_coverage=1 00:23:32.311 --rc genhtml_function_coverage=1 00:23:32.311 --rc genhtml_legend=1 00:23:32.311 --rc geninfo_all_blocks=1 00:23:32.311 --rc geninfo_unexecuted_blocks=1 00:23:32.311 00:23:32.311 ' 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.311 --rc genhtml_branch_coverage=1 00:23:32.311 --rc genhtml_function_coverage=1 00:23:32.311 --rc genhtml_legend=1 00:23:32.311 --rc geninfo_all_blocks=1 00:23:32.311 --rc geninfo_unexecuted_blocks=1 00:23:32.311 00:23:32.311 ' 00:23:32.311 10:29:26 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.311 --rc genhtml_branch_coverage=1 00:23:32.311 --rc genhtml_function_coverage=1 00:23:32.311 --rc genhtml_legend=1 00:23:32.312 --rc geninfo_all_blocks=1 00:23:32.312 --rc geninfo_unexecuted_blocks=1 00:23:32.312 00:23:32.312 ' 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.312 --rc genhtml_branch_coverage=1 00:23:32.312 --rc genhtml_function_coverage=1 00:23:32.312 --rc genhtml_legend=1 00:23:32.312 --rc geninfo_all_blocks=1 00:23:32.312 --rc geninfo_unexecuted_blocks=1 00:23:32.312 00:23:32.312 ' 00:23:32.312 10:29:26 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:32.312 10:29:26 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:23:32.312 10:29:26 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:23:32.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.312 10:29:26 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75957 00:23:32.312 10:29:26 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.312 10:29:26 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:32.312 10:29:26 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75957 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75957 ']' 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.312 10:29:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.312 [2024-11-25 10:29:26.641180] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:23:32.312 [2024-11-25 10:29:26.642022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:23:32.570 [2024-11-25 10:29:26.854034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.829 [2024-11-25 10:29:27.015320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.829 [2024-11-25 10:29:27.015326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:23:33.764 10:29:27 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.764 [2024-11-25 10:29:27.984800] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:33.764 [2024-11-25 10:29:27.987732] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.764 10:29:27 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.764 10:29:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.026 malloc0 00:23:34.026 10:29:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.026 10:29:28 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:23:34.026 10:29:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.026 10:29:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.026 [2024-11-25 10:29:28.137005] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:23:34.026 [2024-11-25 10:29:28.137148] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:23:34.026 [2024-11-25 10:29:28.137169] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:34.027 [2024-11-25 10:29:28.137182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:23:34.027 [2024-11-25 10:29:28.145965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:34.027 [2024-11-25 10:29:28.146010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:34.027 [2024-11-25 10:29:28.152816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:34.027 [2024-11-25 10:29:28.153011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:23:34.027 [2024-11-25 10:29:28.169806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:23:34.027 1 00:23:34.027 10:29:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.027 10:29:28 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:23:34.980 10:29:29 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75998 00:23:34.980 10:29:29 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:23:34.980 10:29:29 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:23:34.980 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:34.980 fio-3.35 00:23:34.980 Starting 1 process 00:23:40.298 10:29:34 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75957 00:23:40.298 10:29:34 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:23:45.567 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75957 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:23:45.567 10:29:39 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76098 00:23:45.567 10:29:39 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.567 10:29:39 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:45.567 10:29:39 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76098 00:23:45.567 10:29:39 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76098 ']' 00:23:45.567 10:29:39 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.567 10:29:39 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.567 10:29:39 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.567 10:29:39 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.567 10:29:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.567 [2024-11-25 10:29:39.327698] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
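That is the whole crash scenario: a malloc-backed ublk disk is served to a 60-second fio job, the serving target is SIGKILLed mid-run, and a replacement target comes up. A condensed sketch (the RPC arguments, fio command, sleep, and kill are verbatim from the log; spdk_pid held 75957 in this run):

    SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &       # first target (pid 75957 above)
    spdk_pid=$!
    # ...wait for the RPC socket as sketched earlier, then:
    "$rpc" bdev_malloc_create -b malloc0 64 4096    # 64 MiB backing bdev
    "$rpc" ublk_start_disk malloc0 1 -q 2 -d 128    # -> /dev/ublkb1

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &                 # I/O keeps running across the crash
    sleep 5
    kill -9 "$spdk_pid"                             # hard-kill the serving target mid-run
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &       # replacement target (pid 76098 above)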
00:23:45.567 [2024-11-25 10:29:39.328220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76098 ] 00:23:45.567 [2024-11-25 10:29:39.521049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:45.567 [2024-11-25 10:29:39.685222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.567 [2024-11-25 10:29:39.685233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:23:46.502 10:29:40 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.502 [2024-11-25 10:29:40.579820] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:46.502 [2024-11-25 10:29:40.582704] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.502 10:29:40 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.502 malloc0 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.502 10:29:40 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.502 [2024-11-25 10:29:40.734996] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:23:46.502 [2024-11-25 10:29:40.735052] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:46.502 [2024-11-25 10:29:40.735069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:23:46.502 [2024-11-25 10:29:40.739848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:23:46.502 [2024-11-25 10:29:40.739882] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:23:46.502 1 00:23:46.502 10:29:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.502 10:29:40 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75998 00:23:47.438 [2024-11-25 10:29:41.739920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:23:47.438 [2024-11-25 10:29:41.744810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:23:47.438 [2024-11-25 10:29:41.744856] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:23:48.421 [2024-11-25 10:29:42.744907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:23:48.421 [2024-11-25 10:29:42.748810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:23:48.421 [2024-11-25 10:29:42.748838] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:23:49.797 [2024-11-25 10:29:43.748868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:23:49.797 [2024-11-25 10:29:43.756817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:23:49.797 [2024-11-25 10:29:43.756846] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:23:49.797 [2024-11-25 10:29:43.756862] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:23:49.797 [2024-11-25 10:29:43.756989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:24:11.823 [2024-11-25 10:30:04.338820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:24:11.823 [2024-11-25 10:30:04.346542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:24:11.823 [2024-11-25 10:30:04.354217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:24:11.823 [2024-11-25 10:30:04.354264] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:24:38.392 00:24:38.392 fio_test: (groupid=0, jobs=1): err= 0: pid=76001: Mon Nov 25 10:30:29 2024 00:24:38.392 read: IOPS=9861, BW=38.5MiB/s (40.4MB/s)(2311MiB/60002msec) 00:24:38.392 slat (nsec): min=1715, max=975519, avg=6246.15, stdev=3275.73 00:24:38.392 clat (usec): min=744, max=30179k, avg=6586.17, stdev=321078.80 00:24:38.392 lat (usec): min=749, max=30179k, avg=6592.42, stdev=321078.82 00:24:38.392 clat percentiles (msec): 00:24:38.392 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:24:38.392 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:24:38.392 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:24:38.392 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 14], 00:24:38.392 | 99.99th=[17113] 00:24:38.392 bw ( KiB/s): min= 5696, max=83112, per=100.00%, avg=77709.83, stdev=12418.61, samples=60 00:24:38.392 iops : min= 1424, max=20778, avg=19427.42, stdev=3104.64, samples=60 00:24:38.392 write: IOPS=9851, BW=38.5MiB/s (40.4MB/s)(2309MiB/60002msec); 0 zone resets 00:24:38.392 slat (nsec): min=1840, max=306365, avg=6413.79, stdev=3147.95 00:24:38.392 clat (usec): min=746, max=30179k, avg=6385.84, stdev=306516.48 00:24:38.392 lat (usec): min=769, max=30179k, avg=6392.26, stdev=306516.50 00:24:38.392 clat percentiles (msec): 00:24:38.392 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:24:38.392 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:24:38.392 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:24:38.392 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 14], 00:24:38.392 | 99.99th=[17113] 00:24:38.392 bw ( KiB/s): min= 5624, max=83408, per=100.00%, avg=77644.90, stdev=12405.96, samples=60 00:24:38.392 iops : min= 1406, max=20852, avg=19411.22, stdev=3101.49, samples=60 00:24:38.392 lat (usec) : 750=0.01%, 1000=0.01% 00:24:38.392 lat (msec) : 2=0.07%, 4=94.69%, 10=5.15%, 20=0.07%, >=2000=0.01% 00:24:38.392 cpu : usr=5.22%, sys=11.70%, ctx=40076, majf=0, minf=13 00:24:38.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:24:38.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:38.392 issued rwts: total=591689,591109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.392 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:24:38.392 00:24:38.392 Run status group 0 (all jobs): 00:24:38.392 READ: bw=38.5MiB/s (40.4MB/s), 38.5MiB/s-38.5MiB/s (40.4MB/s-40.4MB/s), io=2311MiB (2424MB), run=60002-60002msec 00:24:38.392 WRITE: bw=38.5MiB/s (40.4MB/s), 38.5MiB/s-38.5MiB/s (40.4MB/s-40.4MB/s), io=2309MiB (2421MB), run=60002-60002msec 00:24:38.392 00:24:38.392 Disk stats (read/write): 00:24:38.392 ublkb1: ios=589434/588840, merge=0/0, ticks=3839448/3652740, in_queue=7492189, util=99.93% 00:24:38.392 10:30:29 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 [2024-11-25 10:30:29.443259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:38.392 [2024-11-25 10:30:29.473959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:38.392 [2024-11-25 10:30:29.474215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:38.392 [2024-11-25 10:30:29.481822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:38.392 [2024-11-25 10:30:29.482083] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:38.392 [2024-11-25 10:30:29.482219] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 10:30:29 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 [2024-11-25 10:30:29.497972] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:38.392 [2024-11-25 10:30:29.505793] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:38.392 [2024-11-25 10:30:29.505851] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 10:30:29 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:24:38.392 10:30:29 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:24:38.392 10:30:29 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76098 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76098 ']' 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76098 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76098 00:24:38.392 killing process with pid 76098 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76098' 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76098 00:24:38.392 10:30:29 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76098 00:24:38.392 [2024-11-25 10:30:31.073372] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 
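The recovery flow exercised above condenses to a handful of rpc.py calls. The sketch below is a hedged reconstruction from the xtrace lines in this log, not a copy of ublk_recovery.sh, and the fio verify job against the recovered device is elided. The dip to ~5.6 MiB/s in the fio minimum bandwidth likely corresponds to the roughly 20 s window during which UBLK_CMD_START_USER_RECOVERY was outstanding.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096    # 64 MiB backing bdev, 4 KiB blocks
  $rpc ublk_recover_disk malloc0 1              # re-bind ublk id 1 to malloc0 (drives START/END_USER_RECOVERY)
  # ... fio runs its verify workload against the recovered /dev/ublkb1 here ...
  $rpc ublk_stop_disk 1
  $rpc ublk_destroy_target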
00:24:38.392 [2024-11-25 10:30:31.073661] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:38.392 00:24:38.392 real 1m6.087s 00:24:38.392 user 1m51.377s 00:24:38.392 sys 0m20.161s 00:24:38.392 ************************************ 00:24:38.392 END TEST ublk_recovery 00:24:38.392 ************************************ 00:24:38.392 10:30:32 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.392 10:30:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 10:30:32 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:24:38.392 10:30:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:38.392 10:30:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.392 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 10:30:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:24:38.392 10:30:32 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:24:38.392 10:30:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:38.392 10:30:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.392 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 ************************************ 00:24:38.392 START TEST ftl 00:24:38.392 ************************************ 00:24:38.392 10:30:32 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:24:38.392 * Looking for test storage... 00:24:38.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.392 10:30:32 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.392 10:30:32 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.392 10:30:32 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.393 10:30:32 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.393 10:30:32 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.393 10:30:32 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.393 10:30:32 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.393 10:30:32 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.393 10:30:32 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.393 10:30:32 ftl -- scripts/common.sh@344 -- # case "$op" in 00:24:38.393 10:30:32 ftl -- scripts/common.sh@345 -- # : 1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.393 10:30:32 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.393 10:30:32 ftl -- scripts/common.sh@365 -- # decimal 1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@353 -- # local d=1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.393 10:30:32 ftl -- scripts/common.sh@355 -- # echo 1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.393 10:30:32 ftl -- scripts/common.sh@366 -- # decimal 2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@353 -- # local d=2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.393 10:30:32 ftl -- scripts/common.sh@355 -- # echo 2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.393 10:30:32 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.393 10:30:32 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.393 10:30:32 ftl -- scripts/common.sh@368 -- # return 0 00:24:38.393 10:30:32 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.393 10:30:32 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.393 --rc genhtml_branch_coverage=1 00:24:38.393 --rc genhtml_function_coverage=1 00:24:38.393 --rc genhtml_legend=1 00:24:38.393 --rc geninfo_all_blocks=1 00:24:38.393 --rc geninfo_unexecuted_blocks=1 00:24:38.393 00:24:38.393 ' 00:24:38.393 10:30:32 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.393 --rc genhtml_branch_coverage=1 00:24:38.393 --rc genhtml_function_coverage=1 00:24:38.393 --rc genhtml_legend=1 00:24:38.393 --rc geninfo_all_blocks=1 00:24:38.393 --rc geninfo_unexecuted_blocks=1 00:24:38.393 00:24:38.393 ' 00:24:38.393 10:30:32 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.393 --rc genhtml_branch_coverage=1 00:24:38.393 --rc genhtml_function_coverage=1 00:24:38.393 --rc genhtml_legend=1 00:24:38.393 --rc geninfo_all_blocks=1 00:24:38.393 --rc geninfo_unexecuted_blocks=1 00:24:38.393 00:24:38.393 ' 00:24:38.393 10:30:32 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.393 --rc genhtml_branch_coverage=1 00:24:38.393 --rc genhtml_function_coverage=1 00:24:38.393 --rc genhtml_legend=1 00:24:38.393 --rc geninfo_all_blocks=1 00:24:38.393 --rc geninfo_unexecuted_blocks=1 00:24:38.393 00:24:38.393 ' 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:38.393 10:30:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:24:38.393 10:30:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.393 10:30:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.393 10:30:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:38.393 10:30:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:38.393 10:30:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.393 10:30:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:38.393 10:30:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:38.393 10:30:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.393 10:30:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.393 10:30:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:38.393 10:30:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:38.393 10:30:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:38.393 10:30:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:38.393 10:30:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:38.393 10:30:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:38.393 10:30:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.393 10:30:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.393 10:30:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:38.393 10:30:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:38.393 10:30:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:38.393 10:30:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:38.393 10:30:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:38.393 10:30:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:38.393 10:30:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:38.393 10:30:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:38.393 10:30:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:38.393 10:30:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:24:38.393 10:30:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:38.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:38.960 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.960 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.960 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.960 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.960 10:30:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76889 00:24:38.960 10:30:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:24:38.960 10:30:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76889 00:24:38.960 10:30:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 76889 ']' 00:24:38.960 10:30:33 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.960 10:30:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.960 10:30:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.960 10:30:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.960 10:30:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:39.219 [2024-11-25 10:30:33.327517] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:24:39.219 [2024-11-25 10:30:33.327714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76889 ] 00:24:39.219 [2024-11-25 10:30:33.507903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.477 [2024-11-25 10:30:33.638401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.044 10:30:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.044 10:30:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:24:40.044 10:30:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:24:40.303 10:30:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:41.679 10:30:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:24:41.679 10:30:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:42.245 10:30:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:24:42.245 10:30:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:24:42.245 10:30:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@50 -- # break 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:24:42.503 10:30:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:24:42.762 10:30:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:24:42.762 10:30:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:24:42.762 10:30:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:24:42.762 10:30:36 ftl -- ftl/ftl.sh@63 -- # break 00:24:42.762 10:30:36 ftl -- ftl/ftl.sh@66 -- # killprocess 76889 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 76889 ']' 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@958 -- # kill -0 76889 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@959 -- # uname 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.762 10:30:36 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76889 00:24:42.762 killing process with pid 76889 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76889' 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@973 -- # kill 76889 00:24:42.762 10:30:36 ftl -- common/autotest_common.sh@978 -- # wait 76889 00:24:45.301 10:30:39 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:24:45.301 10:30:39 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:24:45.301 10:30:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:45.301 10:30:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.301 10:30:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:45.301 ************************************ 00:24:45.301 START TEST ftl_fio_basic 00:24:45.301 ************************************ 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:24:45.301 * Looking for test storage... 00:24:45.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.301 --rc genhtml_branch_coverage=1 00:24:45.301 --rc genhtml_function_coverage=1 00:24:45.301 --rc genhtml_legend=1 00:24:45.301 --rc geninfo_all_blocks=1 00:24:45.301 --rc geninfo_unexecuted_blocks=1 00:24:45.301 00:24:45.301 ' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.301 --rc genhtml_branch_coverage=1 00:24:45.301 --rc genhtml_function_coverage=1 00:24:45.301 --rc genhtml_legend=1 00:24:45.301 --rc geninfo_all_blocks=1 00:24:45.301 --rc geninfo_unexecuted_blocks=1 00:24:45.301 00:24:45.301 ' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.301 --rc genhtml_branch_coverage=1 00:24:45.301 --rc genhtml_function_coverage=1 00:24:45.301 --rc genhtml_legend=1 00:24:45.301 --rc geninfo_all_blocks=1 00:24:45.301 --rc geninfo_unexecuted_blocks=1 00:24:45.301 00:24:45.301 ' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.301 --rc genhtml_branch_coverage=1 00:24:45.301 --rc genhtml_function_coverage=1 00:24:45.301 --rc genhtml_legend=1 00:24:45.301 --rc geninfo_all_blocks=1 00:24:45.301 --rc geninfo_unexecuted_blocks=1 00:24:45.301 00:24:45.301 ' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:45.301 10:30:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77038 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77038 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77038 ']' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.302 10:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:45.302 [2024-11-25 10:30:39.468040] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
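The launch-and-wait pattern traced here boils down to the following sketch; the polling loop is an assumption about what waitforlisten does internally (retry until the RPC socket answers), not a copy of autotest_common.sh.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &    # 0x7 core mask: reactors on cores 0-2
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svcpid" || exit 1    # give up if the target died during startup
    sleep 0.1
  done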
00:24:45.302 [2024-11-25 10:30:39.468224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77038 ] 00:24:45.560 [2024-11-25 10:30:39.651894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:45.560 [2024-11-25 10:30:39.788893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.560 [2024-11-25 10:30:39.789021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.560 [2024-11-25 10:30:39.789050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:24:46.495 10:30:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:46.753 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:47.317 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:47.317 { 00:24:47.317 "name": "nvme0n1", 00:24:47.317 "aliases": [ 00:24:47.317 "3e54ecb4-0e1a-4da1-a0e5-5f4f3124236a" 00:24:47.317 ], 00:24:47.317 "product_name": "NVMe disk", 00:24:47.317 "block_size": 4096, 00:24:47.317 "num_blocks": 1310720, 00:24:47.317 "uuid": "3e54ecb4-0e1a-4da1-a0e5-5f4f3124236a", 00:24:47.317 "numa_id": -1, 00:24:47.317 "assigned_rate_limits": { 00:24:47.317 "rw_ios_per_sec": 0, 00:24:47.317 "rw_mbytes_per_sec": 0, 00:24:47.317 "r_mbytes_per_sec": 0, 00:24:47.318 "w_mbytes_per_sec": 0 00:24:47.318 }, 00:24:47.318 "claimed": false, 00:24:47.318 "zoned": false, 00:24:47.318 "supported_io_types": { 00:24:47.318 "read": true, 00:24:47.318 "write": true, 00:24:47.318 "unmap": true, 00:24:47.318 "flush": true, 00:24:47.318 "reset": true, 00:24:47.318 "nvme_admin": true, 00:24:47.318 "nvme_io": true, 00:24:47.318 "nvme_io_md": false, 00:24:47.318 "write_zeroes": true, 00:24:47.318 "zcopy": false, 00:24:47.318 "get_zone_info": false, 00:24:47.318 "zone_management": false, 00:24:47.318 "zone_append": false, 00:24:47.318 "compare": true, 00:24:47.318 "compare_and_write": false, 00:24:47.318 "abort": true, 00:24:47.318 
"seek_hole": false, 00:24:47.318 "seek_data": false, 00:24:47.318 "copy": true, 00:24:47.318 "nvme_iov_md": false 00:24:47.318 }, 00:24:47.318 "driver_specific": { 00:24:47.318 "nvme": [ 00:24:47.318 { 00:24:47.318 "pci_address": "0000:00:11.0", 00:24:47.318 "trid": { 00:24:47.318 "trtype": "PCIe", 00:24:47.318 "traddr": "0000:00:11.0" 00:24:47.318 }, 00:24:47.318 "ctrlr_data": { 00:24:47.318 "cntlid": 0, 00:24:47.318 "vendor_id": "0x1b36", 00:24:47.318 "model_number": "QEMU NVMe Ctrl", 00:24:47.318 "serial_number": "12341", 00:24:47.318 "firmware_revision": "8.0.0", 00:24:47.318 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:47.318 "oacs": { 00:24:47.318 "security": 0, 00:24:47.318 "format": 1, 00:24:47.318 "firmware": 0, 00:24:47.318 "ns_manage": 1 00:24:47.318 }, 00:24:47.318 "multi_ctrlr": false, 00:24:47.318 "ana_reporting": false 00:24:47.318 }, 00:24:47.318 "vs": { 00:24:47.318 "nvme_version": "1.4" 00:24:47.318 }, 00:24:47.318 "ns_data": { 00:24:47.318 "id": 1, 00:24:47.318 "can_share": false 00:24:47.318 } 00:24:47.318 } 00:24:47.318 ], 00:24:47.318 "mp_policy": "active_passive" 00:24:47.318 } 00:24:47.318 } 00:24:47.318 ]' 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:47.318 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:47.576 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:24:47.576 10:30:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:47.835 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=cd19c53b-1b36-42ce-a5ea-2be9307bb617 00:24:47.835 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cd19c53b-1b36-42ce-a5ea-2be9307bb617 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e43776dd-4131-4865-ab3d-c655d9519941 
00:24:48.093 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:48.093 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:48.352 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.611 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:48.611 { 00:24:48.611 "name": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:48.611 "aliases": [ 00:24:48.611 "lvs/nvme0n1p0" 00:24:48.612 ], 00:24:48.612 "product_name": "Logical Volume", 00:24:48.612 "block_size": 4096, 00:24:48.612 "num_blocks": 26476544, 00:24:48.612 "uuid": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:48.612 "assigned_rate_limits": { 00:24:48.612 "rw_ios_per_sec": 0, 00:24:48.612 "rw_mbytes_per_sec": 0, 00:24:48.612 "r_mbytes_per_sec": 0, 00:24:48.612 "w_mbytes_per_sec": 0 00:24:48.612 }, 00:24:48.612 "claimed": false, 00:24:48.612 "zoned": false, 00:24:48.612 "supported_io_types": { 00:24:48.612 "read": true, 00:24:48.612 "write": true, 00:24:48.612 "unmap": true, 00:24:48.612 "flush": false, 00:24:48.612 "reset": true, 00:24:48.612 "nvme_admin": false, 00:24:48.612 "nvme_io": false, 00:24:48.612 "nvme_io_md": false, 00:24:48.612 "write_zeroes": true, 00:24:48.612 "zcopy": false, 00:24:48.612 "get_zone_info": false, 00:24:48.612 "zone_management": false, 00:24:48.612 "zone_append": false, 00:24:48.612 "compare": false, 00:24:48.612 "compare_and_write": false, 00:24:48.612 "abort": false, 00:24:48.612 "seek_hole": true, 00:24:48.612 "seek_data": true, 00:24:48.612 "copy": false, 00:24:48.612 "nvme_iov_md": false 00:24:48.612 }, 00:24:48.612 "driver_specific": { 00:24:48.612 "lvol": { 00:24:48.612 "lvol_store_uuid": "cd19c53b-1b36-42ce-a5ea-2be9307bb617", 00:24:48.612 "base_bdev": "nvme0n1", 00:24:48.612 "thin_provision": true, 00:24:48.612 "num_allocated_clusters": 0, 00:24:48.612 "snapshot": false, 00:24:48.612 "clone": false, 00:24:48.612 "esnap_clone": false 00:24:48.612 } 00:24:48.612 } 00:24:48.612 } 00:24:48.612 ]' 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:24:48.612 10:30:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e43776dd-4131-4865-ab3d-c655d9519941 00:24:48.887 10:30:43 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:48.887 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e43776dd-4131-4865-ab3d-c655d9519941 00:24:49.169 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:49.169 { 00:24:49.169 "name": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:49.169 "aliases": [ 00:24:49.169 "lvs/nvme0n1p0" 00:24:49.169 ], 00:24:49.169 "product_name": "Logical Volume", 00:24:49.169 "block_size": 4096, 00:24:49.169 "num_blocks": 26476544, 00:24:49.170 "uuid": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:49.170 "assigned_rate_limits": { 00:24:49.170 "rw_ios_per_sec": 0, 00:24:49.170 "rw_mbytes_per_sec": 0, 00:24:49.170 "r_mbytes_per_sec": 0, 00:24:49.170 "w_mbytes_per_sec": 0 00:24:49.170 }, 00:24:49.170 "claimed": false, 00:24:49.170 "zoned": false, 00:24:49.170 "supported_io_types": { 00:24:49.170 "read": true, 00:24:49.170 "write": true, 00:24:49.170 "unmap": true, 00:24:49.170 "flush": false, 00:24:49.170 "reset": true, 00:24:49.170 "nvme_admin": false, 00:24:49.170 "nvme_io": false, 00:24:49.170 "nvme_io_md": false, 00:24:49.170 "write_zeroes": true, 00:24:49.170 "zcopy": false, 00:24:49.170 "get_zone_info": false, 00:24:49.170 "zone_management": false, 00:24:49.170 "zone_append": false, 00:24:49.170 "compare": false, 00:24:49.170 "compare_and_write": false, 00:24:49.170 "abort": false, 00:24:49.170 "seek_hole": true, 00:24:49.170 "seek_data": true, 00:24:49.170 "copy": false, 00:24:49.170 "nvme_iov_md": false 00:24:49.170 }, 00:24:49.170 "driver_specific": { 00:24:49.170 "lvol": { 00:24:49.170 "lvol_store_uuid": "cd19c53b-1b36-42ce-a5ea-2be9307bb617", 00:24:49.170 "base_bdev": "nvme0n1", 00:24:49.170 "thin_provision": true, 00:24:49.170 "num_allocated_clusters": 0, 00:24:49.170 "snapshot": false, 00:24:49.170 "clone": false, 00:24:49.170 "esnap_clone": false 00:24:49.170 } 00:24:49.170 } 00:24:49.170 } 00:24:49.170 ]' 00:24:49.170 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:24:49.428 10:30:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:24:49.686 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e43776dd-4131-4865-ab3d-c655d9519941 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=e43776dd-4131-4865-ab3d-c655d9519941 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:24:49.686 10:30:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e43776dd-4131-4865-ab3d-c655d9519941 00:24:49.945 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:49.945 { 00:24:49.945 "name": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:49.945 "aliases": [ 00:24:49.945 "lvs/nvme0n1p0" 00:24:49.945 ], 00:24:49.945 "product_name": "Logical Volume", 00:24:49.945 "block_size": 4096, 00:24:49.945 "num_blocks": 26476544, 00:24:49.945 "uuid": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:49.945 "assigned_rate_limits": { 00:24:49.945 "rw_ios_per_sec": 0, 00:24:49.945 "rw_mbytes_per_sec": 0, 00:24:49.945 "r_mbytes_per_sec": 0, 00:24:49.945 "w_mbytes_per_sec": 0 00:24:49.945 }, 00:24:49.945 "claimed": false, 00:24:49.945 "zoned": false, 00:24:49.945 "supported_io_types": { 00:24:49.945 "read": true, 00:24:49.945 "write": true, 00:24:49.945 "unmap": true, 00:24:49.945 "flush": false, 00:24:49.945 "reset": true, 00:24:49.945 "nvme_admin": false, 00:24:49.945 "nvme_io": false, 00:24:49.945 "nvme_io_md": false, 00:24:49.945 "write_zeroes": true, 00:24:49.945 "zcopy": false, 00:24:49.945 "get_zone_info": false, 00:24:49.945 "zone_management": false, 00:24:49.945 "zone_append": false, 00:24:49.945 "compare": false, 00:24:49.945 "compare_and_write": false, 00:24:49.945 "abort": false, 00:24:49.945 "seek_hole": true, 00:24:49.945 "seek_data": true, 00:24:49.945 "copy": false, 00:24:49.945 "nvme_iov_md": false 00:24:49.945 }, 00:24:49.945 "driver_specific": { 00:24:49.945 "lvol": { 00:24:49.945 "lvol_store_uuid": "cd19c53b-1b36-42ce-a5ea-2be9307bb617", 00:24:49.945 "base_bdev": "nvme0n1", 00:24:49.945 "thin_provision": true, 00:24:49.945 "num_allocated_clusters": 0, 00:24:49.945 "snapshot": false, 00:24:49.945 "clone": false, 00:24:49.945 "esnap_clone": false 00:24:49.945 } 00:24:49.945 } 00:24:49.945 } 00:24:49.945 ]' 00:24:49.945 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:49.945 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:24:49.945 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:50.204 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:50.204 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:50.204 10:30:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:24:50.204 10:30:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:24:50.204 10:30:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:24:50.204 10:30:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e43776dd-4131-4865-ab3d-c655d9519941 -c nvc0n1p0 --l2p_dram_limit 60 00:24:50.465 [2024-11-25 10:30:44.556067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.556133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:50.465 [2024-11-25 10:30:44.556164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:50.465 
[2024-11-25 10:30:44.556178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.556289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.556314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.465 [2024-11-25 10:30:44.556331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:50.465 [2024-11-25 10:30:44.556343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.556402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:50.465 [2024-11-25 10:30:44.557451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:50.465 [2024-11-25 10:30:44.557494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.557508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.465 [2024-11-25 10:30:44.557524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:24:50.465 [2024-11-25 10:30:44.557536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.557713] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a9a414f7-7948-4d2f-a864-7415d36041d6 00:24:50.465 [2024-11-25 10:30:44.559813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.559862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:50.465 [2024-11-25 10:30:44.559880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:50.465 [2024-11-25 10:30:44.559895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.569643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.569878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.465 [2024-11-25 10:30:44.570026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:24:50.465 [2024-11-25 10:30:44.570177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.570401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.570486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.465 [2024-11-25 10:30:44.570625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:50.465 [2024-11-25 10:30:44.570690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.570987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.571159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:50.465 [2024-11-25 10:30:44.571307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:50.465 [2024-11-25 10:30:44.571441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.571536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.465 [2024-11-25 10:30:44.576960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 
10:30:44.577123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.465 [2024-11-25 10:30:44.577268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.432 ms 00:24:50.465 [2024-11-25 10:30:44.577401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.577530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.465 [2024-11-25 10:30:44.577612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:50.465 [2024-11-25 10:30:44.577739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:50.465 [2024-11-25 10:30:44.577892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.465 [2024-11-25 10:30:44.578004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:50.466 [2024-11-25 10:30:44.578342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:50.466 [2024-11-25 10:30:44.578536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:50.466 [2024-11-25 10:30:44.578707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:50.466 [2024-11-25 10:30:44.578867] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:50.466 [2024-11-25 10:30:44.579031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:50.466 [2024-11-25 10:30:44.579199] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:50.466 [2024-11-25 10:30:44.579372] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:50.466 [2024-11-25 10:30:44.579430] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:50.466 [2024-11-25 10:30:44.579549] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:50.466 [2024-11-25 10:30:44.579612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.466 [2024-11-25 10:30:44.579694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:50.466 [2024-11-25 10:30:44.579841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:24:50.466 [2024-11-25 10:30:44.579981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.466 [2024-11-25 10:30:44.580108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.466 [2024-11-25 10:30:44.580130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:50.466 [2024-11-25 10:30:44.580148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:50.466 [2024-11-25 10:30:44.580160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.466 [2024-11-25 10:30:44.580293] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:50.466 [2024-11-25 10:30:44.580315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:50.466 [2024-11-25 10:30:44.580337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580363] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:24:50.466 [2024-11-25 10:30:44.580374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:50.466 [2024-11-25 10:30:44.580411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.466 [2024-11-25 10:30:44.580435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:50.466 [2024-11-25 10:30:44.580445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:50.466 [2024-11-25 10:30:44.580458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.466 [2024-11-25 10:30:44.580469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:50.466 [2024-11-25 10:30:44.580482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:50.466 [2024-11-25 10:30:44.580493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:50.466 [2024-11-25 10:30:44.580522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:50.466 [2024-11-25 10:30:44.580560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:50.466 [2024-11-25 10:30:44.580594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:50.466 [2024-11-25 10:30:44.580630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:50.466 [2024-11-25 10:30:44.580664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:50.466 [2024-11-25 10:30:44.580703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.466 [2024-11-25 10:30:44.580728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:50.466 [2024-11-25 10:30:44.580758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:50.466 [2024-11-25 10:30:44.580787] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.466 [2024-11-25 10:30:44.580802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:50.466 [2024-11-25 10:30:44.580815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:50.466 [2024-11-25 10:30:44.580825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:50.466 [2024-11-25 10:30:44.580857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:50.466 [2024-11-25 10:30:44.580872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580883] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:50.466 [2024-11-25 10:30:44.580898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:50.466 [2024-11-25 10:30:44.580910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.466 [2024-11-25 10:30:44.580924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.466 [2024-11-25 10:30:44.580937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:50.466 [2024-11-25 10:30:44.580954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:50.466 [2024-11-25 10:30:44.580965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:50.466 [2024-11-25 10:30:44.580979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:50.466 [2024-11-25 10:30:44.580989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:50.466 [2024-11-25 10:30:44.581003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:50.466 [2024-11-25 10:30:44.581020] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:50.466 [2024-11-25 10:30:44.581038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.466 [2024-11-25 10:30:44.581052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:50.466 [2024-11-25 10:30:44.581067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:50.466 [2024-11-25 10:30:44.581078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:50.466 [2024-11-25 10:30:44.581093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:50.466 [2024-11-25 10:30:44.581104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:50.466 [2024-11-25 10:30:44.581119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:50.466 [2024-11-25 10:30:44.581130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:50.466 [2024-11-25 10:30:44.581145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:24:50.466 [2024-11-25 10:30:44.581156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:50.466 [2024-11-25 10:30:44.581174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:50.466 [2024-11-25 10:30:44.581186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:50.466 [2024-11-25 10:30:44.581202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:50.466 [2024-11-25 10:30:44.581214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:50.466 [2024-11-25 10:30:44.581229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:50.467 [2024-11-25 10:30:44.581240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:50.467 [2024-11-25 10:30:44.581262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.467 [2024-11-25 10:30:44.581280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:50.467 [2024-11-25 10:30:44.581295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:50.467 [2024-11-25 10:30:44.581313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:50.467 [2024-11-25 10:30:44.581328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:50.467 [2024-11-25 10:30:44.581341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.467 [2024-11-25 10:30:44.581357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:50.467 [2024-11-25 10:30:44.581369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:24:50.467 [2024-11-25 10:30:44.581383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.467 [2024-11-25 10:30:44.581494] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:24:50.467 [2024-11-25 10:30:44.581519] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:54.668 [2024-11-25 10:30:48.738116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.738417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:54.668 [2024-11-25 10:30:48.738567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4156.643 ms 00:24:54.668 [2024-11-25 10:30:48.738710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.780452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.780684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.668 [2024-11-25 10:30:48.780851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.321 ms 00:24:54.668 [2024-11-25 10:30:48.780997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.781320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.781486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:54.668 [2024-11-25 10:30:48.781627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:54.668 [2024-11-25 10:30:48.781844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.840689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.841017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.668 [2024-11-25 10:30:48.841166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.614 ms 00:24:54.668 [2024-11-25 10:30:48.841335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.841455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.841596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.668 [2024-11-25 10:30:48.841722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:54.668 [2024-11-25 10:30:48.841833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.842636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.842806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.668 [2024-11-25 10:30:48.842940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:24:54.668 [2024-11-25 10:30:48.843022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.843330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.843482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.668 [2024-11-25 10:30:48.843620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:24:54.668 [2024-11-25 10:30:48.843652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.866497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.866742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.668 [2024-11-25 
10:30:48.866915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.793 ms 00:24:54.668 [2024-11-25 10:30:48.866977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.668 [2024-11-25 10:30:48.882122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:54.668 [2024-11-25 10:30:48.904084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.668 [2024-11-25 10:30:48.904427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.668 [2024-11-25 10:30:48.904573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.816 ms 00:24:54.668 [2024-11-25 10:30:48.904709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 10:30:49.017227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.017484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:54.927 [2024-11-25 10:30:49.017529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.394 ms 00:24:54.927 [2024-11-25 10:30:49.017544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 10:30:49.017875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.017901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.927 [2024-11-25 10:30:49.017935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:24:54.927 [2024-11-25 10:30:49.017948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 10:30:49.050045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.050106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:54.927 [2024-11-25 10:30:49.050130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.998 ms 00:24:54.927 [2024-11-25 10:30:49.050143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 10:30:49.085081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.085152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:54.927 [2024-11-25 10:30:49.085191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.857 ms 00:24:54.927 [2024-11-25 10:30:49.085213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 10:30:49.086209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.086238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.927 [2024-11-25 10:30:49.086263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:24:54.927 [2024-11-25 10:30:49.086276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 10:30:49.195510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.195719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:54.927 [2024-11-25 10:30:49.195797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.122 ms 00:24:54.927 [2024-11-25 10:30:49.195821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.927 [2024-11-25 
10:30:49.230291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.927 [2024-11-25 10:30:49.230509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:54.927 [2024-11-25 10:30:49.230548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.331 ms 00:24:54.927 [2024-11-25 10:30:49.230564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.185 [2024-11-25 10:30:49.264052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.185 [2024-11-25 10:30:49.264256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:55.185 [2024-11-25 10:30:49.264387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.420 ms 00:24:55.185 [2024-11-25 10:30:49.264533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.185 [2024-11-25 10:30:49.296849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.185 [2024-11-25 10:30:49.297042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:55.185 [2024-11-25 10:30:49.297171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.224 ms 00:24:55.185 [2024-11-25 10:30:49.297222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.185 [2024-11-25 10:30:49.297360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.185 [2024-11-25 10:30:49.297420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:55.185 [2024-11-25 10:30:49.297553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:55.185 [2024-11-25 10:30:49.297608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.185 [2024-11-25 10:30:49.297970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.185 [2024-11-25 10:30:49.298110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:55.185 [2024-11-25 10:30:49.298227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:55.185 [2024-11-25 10:30:49.298279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.185 [2024-11-25 10:30:49.299889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4743.275 ms, result 0 00:24:55.185 { 00:24:55.185 "name": "ftl0", 00:24:55.185 "uuid": "a9a414f7-7948-4d2f-a864-7415d36041d6" 00:24:55.185 } 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:55.185 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:55.444 10:30:49 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:55.702 [ 00:24:55.702 { 00:24:55.702 "name": "ftl0", 00:24:55.702 "aliases": [ 00:24:55.702 "a9a414f7-7948-4d2f-a864-7415d36041d6" 00:24:55.702 ], 00:24:55.702 "product_name": "FTL 
disk", 00:24:55.702 "block_size": 4096, 00:24:55.702 "num_blocks": 20971520, 00:24:55.702 "uuid": "a9a414f7-7948-4d2f-a864-7415d36041d6", 00:24:55.702 "assigned_rate_limits": { 00:24:55.702 "rw_ios_per_sec": 0, 00:24:55.702 "rw_mbytes_per_sec": 0, 00:24:55.702 "r_mbytes_per_sec": 0, 00:24:55.702 "w_mbytes_per_sec": 0 00:24:55.702 }, 00:24:55.702 "claimed": false, 00:24:55.702 "zoned": false, 00:24:55.702 "supported_io_types": { 00:24:55.702 "read": true, 00:24:55.702 "write": true, 00:24:55.702 "unmap": true, 00:24:55.702 "flush": true, 00:24:55.702 "reset": false, 00:24:55.702 "nvme_admin": false, 00:24:55.702 "nvme_io": false, 00:24:55.702 "nvme_io_md": false, 00:24:55.702 "write_zeroes": true, 00:24:55.702 "zcopy": false, 00:24:55.702 "get_zone_info": false, 00:24:55.702 "zone_management": false, 00:24:55.702 "zone_append": false, 00:24:55.702 "compare": false, 00:24:55.702 "compare_and_write": false, 00:24:55.702 "abort": false, 00:24:55.702 "seek_hole": false, 00:24:55.702 "seek_data": false, 00:24:55.702 "copy": false, 00:24:55.702 "nvme_iov_md": false 00:24:55.702 }, 00:24:55.702 "driver_specific": { 00:24:55.702 "ftl": { 00:24:55.702 "base_bdev": "e43776dd-4131-4865-ab3d-c655d9519941", 00:24:55.702 "cache": "nvc0n1p0" 00:24:55.702 } 00:24:55.702 } 00:24:55.702 } 00:24:55.702 ] 00:24:55.702 10:30:50 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:24:55.702 10:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:24:55.702 10:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:56.269 10:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:24:56.269 10:30:50 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:56.528 [2024-11-25 10:30:50.717119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.717384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:56.528 [2024-11-25 10:30:50.717417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:56.528 [2024-11-25 10:30:50.717435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.717491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.528 [2024-11-25 10:30:50.721228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.721263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:56.528 [2024-11-25 10:30:50.721283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.704 ms 00:24:56.528 [2024-11-25 10:30:50.721296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.721869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.721899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:56.528 [2024-11-25 10:30:50.721918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:24:56.528 [2024-11-25 10:30:50.721932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.725153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.725187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:56.528 
[2024-11-25 10:30:50.725206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:24:56.528 [2024-11-25 10:30:50.725218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.731825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.731862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:56.528 [2024-11-25 10:30:50.731907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:24:56.528 [2024-11-25 10:30:50.731921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.764468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.764532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:56.528 [2024-11-25 10:30:50.764557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.440 ms 00:24:56.528 [2024-11-25 10:30:50.764569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.785583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.785653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:56.528 [2024-11-25 10:30:50.785678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.915 ms 00:24:56.528 [2024-11-25 10:30:50.785694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.786030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.786057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:56.528 [2024-11-25 10:30:50.786075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:24:56.528 [2024-11-25 10:30:50.786087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.819187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.819415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:56.528 [2024-11-25 10:30:50.819453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.060 ms 00:24:56.528 [2024-11-25 10:30:50.819467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.528 [2024-11-25 10:30:50.850912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.528 [2024-11-25 10:30:50.850971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:56.528 [2024-11-25 10:30:50.851008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.363 ms 00:24:56.528 [2024-11-25 10:30:50.851020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.789 [2024-11-25 10:30:50.880968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.789 [2024-11-25 10:30:50.881024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:56.789 [2024-11-25 10:30:50.881047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.873 ms 00:24:56.789 [2024-11-25 10:30:50.881059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.789 [2024-11-25 10:30:50.912226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.789 [2024-11-25 10:30:50.912314] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:56.789 [2024-11-25 10:30:50.912352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.998 ms 00:24:56.789 [2024-11-25 10:30:50.912365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.789 [2024-11-25 10:30:50.912441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:56.789 [2024-11-25 10:30:50.912468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 
[2024-11-25 10:30:50.912834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.912989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:24:56.789 [2024-11-25 10:30:50.913218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:56.789 [2024-11-25 10:30:50.913507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.913973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:56.790 [2024-11-25 10:30:50.914000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:56.790 [2024-11-25 10:30:50.914026] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9a414f7-7948-4d2f-a864-7415d36041d6 00:24:56.790 [2024-11-25 10:30:50.914039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:56.790 [2024-11-25 10:30:50.914056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:56.790 [2024-11-25 10:30:50.914067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:56.790 [2024-11-25 10:30:50.914086] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:56.790 [2024-11-25 10:30:50.914097] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:56.790 [2024-11-25 10:30:50.914112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:56.790 [2024-11-25 10:30:50.914123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:56.790 [2024-11-25 10:30:50.914136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:56.790 [2024-11-25 10:30:50.914147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:56.790 [2024-11-25 10:30:50.914161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.790 [2024-11-25 10:30:50.914173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:56.790 [2024-11-25 10:30:50.914188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:24:56.790 [2024-11-25 10:30:50.914200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.790 [2024-11-25 10:30:50.932217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.790 [2024-11-25 10:30:50.932281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:56.790 [2024-11-25 10:30:50.932322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.918 ms 00:24:56.790 [2024-11-25 10:30:50.932345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.790 [2024-11-25 10:30:50.932871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.790 [2024-11-25 10:30:50.932895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:56.790 [2024-11-25 10:30:50.932913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:24:56.790 [2024-11-25 10:30:50.932925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.790 [2024-11-25 10:30:50.994460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.790 [2024-11-25 10:30:50.994739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.790 [2024-11-25 10:30:50.994798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.790 [2024-11-25 10:30:50.994817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:56.790 [2024-11-25 10:30:50.994922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.790 [2024-11-25 10:30:50.994939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.790 [2024-11-25 10:30:50.994955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.790 [2024-11-25 10:30:50.994967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.790 [2024-11-25 10:30:50.995149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.790 [2024-11-25 10:30:50.995170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.790 [2024-11-25 10:30:50.995190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.790 [2024-11-25 10:30:50.995202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.790 [2024-11-25 10:30:50.995247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.790 [2024-11-25 10:30:50.995263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.790 [2024-11-25 10:30:50.995277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.790 [2024-11-25 10:30:50.995289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.119365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.119465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.049 [2024-11-25 10:30:51.119492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.119506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.209282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.209358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.049 [2024-11-25 10:30:51.209385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.209398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.209558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.209579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.049 [2024-11-25 10:30:51.209616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.209631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.209737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.209755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.049 [2024-11-25 10:30:51.209795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.209813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.209981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.210002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.049 [2024-11-25 10:30:51.210018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 
10:30:51.210030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.210124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.210143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:57.049 [2024-11-25 10:30:51.210158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.210180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.210246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.210262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.049 [2024-11-25 10:30:51.210277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.210289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.210400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:57.049 [2024-11-25 10:30:51.210419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.049 [2024-11-25 10:30:51.210435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:57.049 [2024-11-25 10:30:51.210446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.049 [2024-11-25 10:30:51.210657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.512 ms, result 0 00:24:57.049 true 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77038 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77038 ']' 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77038 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77038 00:24:57.049 killing process with pid 77038 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77038' 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77038 00:24:57.049 10:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77038 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:02.317 10:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:02.317 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:25:02.317 fio-3.35 00:25:02.317 Starting 1 thread 00:25:07.689 00:25:07.689 test: (groupid=0, jobs=1): err= 0: pid=77274: Mon Nov 25 10:31:01 2024 00:25:07.689 read: IOPS=893, BW=59.3MiB/s (62.2MB/s)(255MiB/4290msec) 00:25:07.689 slat (usec): min=5, max=107, avg=10.59, stdev= 4.44 00:25:07.689 clat (usec): min=307, max=995, avg=496.14, stdev=63.89 00:25:07.689 lat (usec): min=313, max=1017, avg=506.73, stdev=65.29 00:25:07.689 clat percentiles (usec): 00:25:07.689 | 1.00th=[ 375], 5.00th=[ 400], 10.00th=[ 424], 20.00th=[ 441], 00:25:07.689 | 30.00th=[ 453], 40.00th=[ 482], 50.00th=[ 502], 60.00th=[ 510], 00:25:07.689 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 603], 00:25:07.689 | 99.00th=[ 660], 99.50th=[ 701], 99.90th=[ 816], 99.95th=[ 963], 00:25:07.689 | 99.99th=[ 996] 00:25:07.689 write: IOPS=899, BW=59.7MiB/s (62.6MB/s)(256MiB/4286msec); 0 zone resets 00:25:07.689 slat (nsec): min=19666, max=92779, avg=28159.93, stdev=5906.41 00:25:07.689 clat (usec): min=391, max=1054, avg=565.68, stdev=69.92 00:25:07.689 lat (usec): min=415, max=1084, avg=593.84, stdev=70.64 00:25:07.689 clat percentiles (usec): 00:25:07.689 | 1.00th=[ 416], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 515], 00:25:07.689 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 578], 00:25:07.689 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 644], 95.00th=[ 685], 00:25:07.689 | 99.00th=[ 775], 99.50th=[ 816], 99.90th=[ 955], 99.95th=[ 971], 00:25:07.689 | 99.99th=[ 1057] 00:25:07.689 bw ( KiB/s): min=59024, max=66368, per=99.40%, avg=60809.00, stdev=2519.74, samples=8 00:25:07.689 iops : min= 868, max= 976, avg=894.25, stdev=37.05, samples=8 00:25:07.689 lat (usec) : 500=32.85%, 750=66.20%, 1000=0.94% 00:25:07.689 lat (msec) : 
2=0.01% 00:25:07.689 cpu : usr=99.14%, sys=0.12%, ctx=10, majf=0, minf=1169 00:25:07.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:07.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.689 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:07.689 00:25:07.689 Run status group 0 (all jobs): 00:25:07.689 READ: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=255MiB (267MB), run=4290-4290msec 00:25:07.689 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=256MiB (269MB), run=4286-4286msec 00:25:09.605 ----------------------------------------------------- 00:25:09.605 Suppressions used: 00:25:09.605 count bytes template 00:25:09.605 1 5 /usr/src/fio/parse.c 00:25:09.605 1 8 libtcmalloc_minimal.so 00:25:09.605 1 904 libcrypto.so 00:25:09.605 ----------------------------------------------------- 00:25:09.605 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:09.605 10:31:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:09.864 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:09.864 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:09.864 fio-3.35 00:25:09.864 Starting 2 threads 00:25:41.984 00:25:41.984 first_half: (groupid=0, jobs=1): err= 0: pid=77384: Mon Nov 25 10:31:34 2024 00:25:41.984 read: IOPS=2279, BW=9117KiB/s (9336kB/s)(256MiB/28726msec) 00:25:41.984 slat (nsec): min=4503, max=71601, avg=8001.75, stdev=2408.62 00:25:41.984 clat (usec): min=754, max=317899, avg=47481.35, stdev=30856.29 00:25:41.984 lat (usec): min=759, max=317907, avg=47489.35, stdev=30856.51 00:25:41.984 clat percentiles (msec): 00:25:41.984 | 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 38], 00:25:41.984 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 41], 00:25:41.984 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 54], 95.00th=[ 93], 00:25:41.984 | 99.00th=[ 211], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 292], 00:25:41.984 | 99.99th=[ 313] 00:25:41.984 write: IOPS=2285, BW=9142KiB/s (9361kB/s)(256MiB/28675msec); 0 zone resets 00:25:41.984 slat (usec): min=5, max=629, avg= 9.10, stdev= 5.46 00:25:41.984 clat (usec): min=455, max=59292, avg=8629.18, stdev=8718.36 00:25:41.984 lat (usec): min=470, max=59300, avg=8638.27, stdev=8718.52 00:25:41.984 clat percentiles (usec): 00:25:41.984 | 1.00th=[ 1139], 5.00th=[ 1598], 10.00th=[ 2008], 20.00th=[ 3490], 00:25:41.984 | 30.00th=[ 4621], 40.00th=[ 5735], 50.00th=[ 6587], 60.00th=[ 7439], 00:25:41.984 | 70.00th=[ 8225], 80.00th=[10028], 90.00th=[15926], 95.00th=[24773], 00:25:41.984 | 99.00th=[47449], 99.50th=[50070], 99.90th=[55313], 99.95th=[56886], 00:25:41.984 | 99.99th=[58459] 00:25:41.984 bw ( KiB/s): min= 1576, max=48248, per=100.00%, avg=20029.54, stdev=14533.82, samples=26 00:25:41.984 iops : min= 394, max=12062, avg=5007.46, stdev=3633.54, samples=26 00:25:41.984 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.19% 00:25:41.984 lat (msec) : 2=4.75%, 4=7.17%, 10=27.96%, 20=8.24%, 50=43.99% 00:25:41.984 lat (msec) : 100=5.29%, 250=2.21%, 500=0.13% 00:25:41.984 cpu : usr=99.12%, sys=0.15%, ctx=38, majf=0, minf=5532 00:25:41.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:41.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.984 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:41.984 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:41.984 second_half: (groupid=0, jobs=1): err= 0: pid=77385: Mon Nov 25 10:31:34 2024 00:25:41.984 read: IOPS=2303, BW=9213KiB/s (9434kB/s)(256MiB/28433msec) 00:25:41.984 slat (nsec): min=4997, max=34718, avg=7806.90, stdev=1953.79 00:25:41.984 clat (msec): min=11, max=291, avg=47.70, stdev=26.96 00:25:41.984 lat (msec): min=11, max=291, avg=47.71, stdev=26.96 00:25:41.984 clat percentiles (msec): 00:25:41.984 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 38], 00:25:41.984 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 41], 00:25:41.984 | 70.00th=[ 45], 80.00th=[ 48], 90.00th=[ 56], 95.00th=[ 86], 00:25:41.984 | 99.00th=[ 190], 
99.50th=[ 211], 99.90th=[ 251], 99.95th=[ 253], 00:25:41.984 | 99.99th=[ 288] 00:25:41.984 write: IOPS=2318, BW=9275KiB/s (9498kB/s)(256MiB/28262msec); 0 zone resets 00:25:41.984 slat (usec): min=5, max=998, avg= 8.98, stdev= 6.92 00:25:41.984 clat (usec): min=460, max=51145, avg=7843.92, stdev=4942.98 00:25:41.984 lat (usec): min=477, max=51171, avg=7852.90, stdev=4943.32 00:25:41.984 clat percentiles (usec): 00:25:41.984 | 1.00th=[ 1303], 5.00th=[ 2180], 10.00th=[ 3130], 20.00th=[ 4178], 00:25:41.984 | 30.00th=[ 5276], 40.00th=[ 5997], 50.00th=[ 6915], 60.00th=[ 7570], 00:25:41.984 | 70.00th=[ 8586], 80.00th=[10421], 90.00th=[14877], 95.00th=[16319], 00:25:41.984 | 99.00th=[26608], 99.50th=[32113], 99.90th=[46924], 99.95th=[49546], 00:25:41.984 | 99.99th=[50594] 00:25:41.984 bw ( KiB/s): min= 600, max=43304, per=100.00%, avg=21761.00, stdev=13439.14, samples=24 00:25:41.984 iops : min= 150, max=10826, avg=5440.25, stdev=3359.78, samples=24 00:25:41.984 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.17% 00:25:41.984 lat (msec) : 2=1.77%, 4=6.93%, 10=30.65%, 20=9.53%, 50=42.87% 00:25:41.984 lat (msec) : 100=5.90%, 250=2.08%, 500=0.05% 00:25:41.984 cpu : usr=99.11%, sys=0.14%, ctx=36, majf=0, minf=5581 00:25:41.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:41.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.984 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:41.984 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:41.984 00:25:41.984 Run status group 0 (all jobs): 00:25:41.984 READ: bw=17.8MiB/s (18.7MB/s), 9117KiB/s-9213KiB/s (9336kB/s-9434kB/s), io=512MiB (536MB), run=28433-28726msec 00:25:41.984 WRITE: bw=17.9MiB/s (18.7MB/s), 9142KiB/s-9275KiB/s (9361kB/s-9498kB/s), io=512MiB (537MB), run=28262-28675msec 00:25:42.918 ----------------------------------------------------- 00:25:42.918 Suppressions used: 00:25:42.918 count bytes template 00:25:42.918 2 10 /usr/src/fio/parse.c 00:25:42.918 3 288 /usr/src/fio/iolog.c 00:25:42.918 1 8 libtcmalloc_minimal.so 00:25:42.918 1 904 libcrypto.so 00:25:42.918 ----------------------------------------------------- 00:25:42.918 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:42.918 10:31:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:25:43.176 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:43.176 fio-3.35 00:25:43.176 Starting 1 thread 00:26:01.260 00:26:01.260 test: (groupid=0, jobs=1): err= 0: pid=77739: Mon Nov 25 10:31:55 2024 00:26:01.260 read: IOPS=6283, BW=24.5MiB/s (25.7MB/s)(255MiB/10376msec) 00:26:01.260 slat (usec): min=4, max=419, avg= 7.16, stdev= 3.78 00:26:01.260 clat (usec): min=776, max=38688, avg=20357.33, stdev=1490.41 00:26:01.260 lat (usec): min=781, max=38693, avg=20364.49, stdev=1490.51 00:26:01.260 clat percentiles (usec): 00:26:01.260 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19268], 20.00th=[19530], 00:26:01.260 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:26:01.260 | 70.00th=[20579], 80.00th=[20841], 90.00th=[22152], 95.00th=[23200], 00:26:01.260 | 99.00th=[26084], 99.50th=[27132], 99.90th=[30278], 99.95th=[33817], 00:26:01.260 | 99.99th=[37487] 00:26:01.260 write: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(256MiB/6054msec); 0 zone resets 00:26:01.260 slat (usec): min=6, max=2248, avg= 9.95, stdev=10.53 00:26:01.260 clat (usec): min=716, max=63195, avg=11760.40, stdev=14080.68 00:26:01.260 lat (usec): min=725, max=63204, avg=11770.35, stdev=14080.65 00:26:01.260 clat percentiles (usec): 00:26:01.260 | 1.00th=[ 996], 5.00th=[ 1188], 10.00th=[ 1319], 20.00th=[ 1532], 00:26:01.260 | 30.00th=[ 1745], 40.00th=[ 2311], 50.00th=[ 8291], 60.00th=[ 9634], 00:26:01.260 | 70.00th=[11338], 80.00th=[13435], 90.00th=[39584], 95.00th=[43779], 00:26:01.260 | 99.00th=[52167], 99.50th=[56361], 99.90th=[61080], 99.95th=[61604], 00:26:01.260 | 99.99th=[62653] 00:26:01.260 bw ( KiB/s): min= 3184, max=57136, per=93.12%, avg=40323.38, stdev=12488.08, samples=13 00:26:01.260 iops : min= 796, max=14284, avg=10080.85, stdev=3122.02, samples=13 00:26:01.260 lat (usec) : 750=0.01%, 1000=0.51% 00:26:01.260 lat (msec) : 2=17.98%, 4=2.47%, 10=10.10%, 20=37.57%, 50=30.69% 00:26:01.260 lat (msec) : 100=0.67% 00:26:01.260 cpu : usr=98.11%, sys=0.57%, ctx=68, majf=0, minf=5565 00:26:01.260 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:01.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.260 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:01.260 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:01.260 00:26:01.260 Run status group 0 (all jobs): 00:26:01.260 READ: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=255MiB (267MB), run=10376-10376msec 00:26:01.260 WRITE: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=256MiB (268MB), run=6054-6054msec 00:26:03.161 ----------------------------------------------------- 00:26:03.161 Suppressions used: 00:26:03.161 count bytes template 00:26:03.161 1 5 /usr/src/fio/parse.c 00:26:03.161 2 192 /usr/src/fio/iolog.c 00:26:03.161 1 8 libtcmalloc_minimal.so 00:26:03.161 1 904 libcrypto.so 00:26:03.161 ----------------------------------------------------- 00:26:03.161 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:26:03.161 Remove shared memory files 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57881 /dev/shm/spdk_tgt_trace.pid75957 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:26:03.161 ************************************ 00:26:03.161 END TEST ftl_fio_basic 00:26:03.161 ************************************ 00:26:03.161 00:26:03.161 real 1m17.978s 00:26:03.161 user 2m53.289s 00:26:03.161 sys 0m4.359s 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.161 10:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:03.161 10:31:57 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:03.161 10:31:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:03.161 10:31:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.161 10:31:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:03.161 ************************************ 00:26:03.161 START TEST ftl_bdevperf 00:26:03.161 ************************************ 00:26:03.161 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:03.161 * Looking for test storage... 
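
Both fio runs above go through the same fio_bdev wrapper, which locates the ASAN runtime the SPDK fio plugin was linked against (ldd | grep libasan | awk '{print $3}') and preloads it ahead of the plugin before invoking fio. A condensed sketch of that pattern, using the plugin and fio paths from this run (the rationale in the comment is an inference, not stated in the log):

    #!/usr/bin/env bash
    # Condensed form of the fio_plugin wrapper traced above: find the ASAN
    # runtime the plugin links against and preload it ahead of the plugin.
    # ASAN generally must be loaded before any sanitized shared object, so
    # preloading avoids fio dlopen()ing the plugin with ASAN uninitialized.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=$1   # path to an fio job file, e.g. randw-verify-depth128.fio

    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"
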
00:26:03.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:03.161 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:03.161 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:03.161 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:03.161 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:03.161 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:03.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.162 --rc genhtml_branch_coverage=1 00:26:03.162 --rc genhtml_function_coverage=1 00:26:03.162 --rc genhtml_legend=1 00:26:03.162 --rc geninfo_all_blocks=1 00:26:03.162 --rc geninfo_unexecuted_blocks=1 00:26:03.162 00:26:03.162 ' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:03.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.162 --rc genhtml_branch_coverage=1 00:26:03.162 
--rc genhtml_function_coverage=1 00:26:03.162 --rc genhtml_legend=1 00:26:03.162 --rc geninfo_all_blocks=1 00:26:03.162 --rc geninfo_unexecuted_blocks=1 00:26:03.162 00:26:03.162 ' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:03.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.162 --rc genhtml_branch_coverage=1 00:26:03.162 --rc genhtml_function_coverage=1 00:26:03.162 --rc genhtml_legend=1 00:26:03.162 --rc geninfo_all_blocks=1 00:26:03.162 --rc geninfo_unexecuted_blocks=1 00:26:03.162 00:26:03.162 ' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:03.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.162 --rc genhtml_branch_coverage=1 00:26:03.162 --rc genhtml_function_coverage=1 00:26:03.162 --rc genhtml_legend=1 00:26:03.162 --rc geninfo_all_blocks=1 00:26:03.162 --rc geninfo_unexecuted_blocks=1 00:26:03.162 00:26:03.162 ' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:26:03.162 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78009 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78009 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78009 ']' 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.163 10:31:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.163 [2024-11-25 10:31:57.467699] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
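
bdevperf is started with -z, so it idles until configured over RPC, and waitforlisten blocks the script until the application accepts connections on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern (polling with the spdk_get_version RPC is an assumption here; the real waitforlisten helper in autotest_common.sh is more thorough):

    #!/usr/bin/env bash
    # Start bdevperf in wait-for-RPC mode, then poll until its RPC socket
    # answers before issuing any configuration calls.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill "$bdevperf_pid"' EXIT

    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version \
            &>/dev/null && break
        sleep 0.5
    done
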
00:26:03.163 [2024-11-25 10:31:57.468116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78009 ] 00:26:03.421 [2024-11-25 10:31:57.645483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.679 [2024-11-25 10:31:57.781080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:26:04.247 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:04.815 10:31:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:04.815 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:04.815 { 00:26:04.815 "name": "nvme0n1", 00:26:04.815 "aliases": [ 00:26:04.815 "605a64a1-aa49-4ecb-9a40-507f5bf90163" 00:26:04.815 ], 00:26:04.815 "product_name": "NVMe disk", 00:26:04.815 "block_size": 4096, 00:26:04.815 "num_blocks": 1310720, 00:26:04.815 "uuid": "605a64a1-aa49-4ecb-9a40-507f5bf90163", 00:26:04.815 "numa_id": -1, 00:26:04.815 "assigned_rate_limits": { 00:26:04.815 "rw_ios_per_sec": 0, 00:26:04.815 "rw_mbytes_per_sec": 0, 00:26:04.815 "r_mbytes_per_sec": 0, 00:26:04.815 "w_mbytes_per_sec": 0 00:26:04.815 }, 00:26:04.815 "claimed": true, 00:26:04.815 "claim_type": "read_many_write_one", 00:26:04.815 "zoned": false, 00:26:04.815 "supported_io_types": { 00:26:04.815 "read": true, 00:26:04.815 "write": true, 00:26:04.815 "unmap": true, 00:26:04.815 "flush": true, 00:26:04.815 "reset": true, 00:26:04.815 "nvme_admin": true, 00:26:04.815 "nvme_io": true, 00:26:04.815 "nvme_io_md": false, 00:26:04.815 "write_zeroes": true, 00:26:04.815 "zcopy": false, 00:26:04.815 "get_zone_info": false, 00:26:04.815 "zone_management": false, 00:26:04.815 "zone_append": false, 00:26:04.815 "compare": true, 00:26:04.815 "compare_and_write": false, 00:26:04.815 "abort": true, 00:26:04.815 "seek_hole": false, 00:26:04.815 "seek_data": false, 00:26:04.815 "copy": true, 00:26:04.815 "nvme_iov_md": false 00:26:04.815 }, 00:26:04.815 "driver_specific": { 00:26:04.815 
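
The JSON dump that follows is what the get_bdev_size helper feeds through jq: block_size times num_blocks, converted to MiB, gives the 5120 figure echoed after it (4096 B x 1310720 blocks = 5 GiB). The same computation as a standalone sketch, using the rpc.py path and jq filters from this trace:

    #!/usr/bin/env bash
    # Compute a bdev's capacity in MiB from bdev_get_bdevs output,
    # mirroring the get_bdev_size helper used in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev=nvme0n1

    bs=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')
    nb=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')
    echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 1310720 / 2^20 = 5120
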
"nvme": [ 00:26:04.815 { 00:26:04.815 "pci_address": "0000:00:11.0", 00:26:04.815 "trid": { 00:26:04.815 "trtype": "PCIe", 00:26:04.815 "traddr": "0000:00:11.0" 00:26:04.815 }, 00:26:04.815 "ctrlr_data": { 00:26:04.815 "cntlid": 0, 00:26:04.815 "vendor_id": "0x1b36", 00:26:04.815 "model_number": "QEMU NVMe Ctrl", 00:26:04.815 "serial_number": "12341", 00:26:04.815 "firmware_revision": "8.0.0", 00:26:04.815 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:04.815 "oacs": { 00:26:04.815 "security": 0, 00:26:04.815 "format": 1, 00:26:04.815 "firmware": 0, 00:26:04.815 "ns_manage": 1 00:26:04.815 }, 00:26:04.815 "multi_ctrlr": false, 00:26:04.815 "ana_reporting": false 00:26:04.815 }, 00:26:04.815 "vs": { 00:26:04.815 "nvme_version": "1.4" 00:26:04.815 }, 00:26:04.815 "ns_data": { 00:26:04.815 "id": 1, 00:26:04.815 "can_share": false 00:26:04.815 } 00:26:04.815 } 00:26:04.815 ], 00:26:04.815 "mp_policy": "active_passive" 00:26:04.815 } 00:26:04.815 } 00:26:04.815 ]' 00:26:04.815 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:04.815 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:04.815 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:05.076 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:05.336 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=cd19c53b-1b36-42ce-a5ea-2be9307bb617 00:26:05.336 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:26:05.336 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd19c53b-1b36-42ce-a5ea-2be9307bb617 00:26:05.595 10:31:59 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:05.854 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=570a4244-c515-4b4d-93d2-7cac8ca47d1b 00:26:05.854 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 570a4244-c515-4b4d-93d2-7cac8ca47d1b 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.113 10:32:00 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:06.113 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.679 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:06.679 { 00:26:06.679 "name": "89c50127-b6f7-4771-8f58-f1ec3c778e76", 00:26:06.679 "aliases": [ 00:26:06.679 "lvs/nvme0n1p0" 00:26:06.679 ], 00:26:06.679 "product_name": "Logical Volume", 00:26:06.679 "block_size": 4096, 00:26:06.679 "num_blocks": 26476544, 00:26:06.679 "uuid": "89c50127-b6f7-4771-8f58-f1ec3c778e76", 00:26:06.679 "assigned_rate_limits": { 00:26:06.679 "rw_ios_per_sec": 0, 00:26:06.679 "rw_mbytes_per_sec": 0, 00:26:06.679 "r_mbytes_per_sec": 0, 00:26:06.679 "w_mbytes_per_sec": 0 00:26:06.679 }, 00:26:06.679 "claimed": false, 00:26:06.680 "zoned": false, 00:26:06.680 "supported_io_types": { 00:26:06.680 "read": true, 00:26:06.680 "write": true, 00:26:06.680 "unmap": true, 00:26:06.680 "flush": false, 00:26:06.680 "reset": true, 00:26:06.680 "nvme_admin": false, 00:26:06.680 "nvme_io": false, 00:26:06.680 "nvme_io_md": false, 00:26:06.680 "write_zeroes": true, 00:26:06.680 "zcopy": false, 00:26:06.680 "get_zone_info": false, 00:26:06.680 "zone_management": false, 00:26:06.680 "zone_append": false, 00:26:06.680 "compare": false, 00:26:06.680 "compare_and_write": false, 00:26:06.680 "abort": false, 00:26:06.680 "seek_hole": true, 00:26:06.680 "seek_data": true, 00:26:06.680 "copy": false, 00:26:06.680 "nvme_iov_md": false 00:26:06.680 }, 00:26:06.680 "driver_specific": { 00:26:06.680 "lvol": { 00:26:06.680 "lvol_store_uuid": "570a4244-c515-4b4d-93d2-7cac8ca47d1b", 00:26:06.680 "base_bdev": "nvme0n1", 00:26:06.680 "thin_provision": true, 00:26:06.680 "num_allocated_clusters": 0, 00:26:06.680 "snapshot": false, 00:26:06.680 "clone": false, 00:26:06.680 "esnap_clone": false 00:26:06.680 } 00:26:06.680 } 00:26:06.680 } 00:26:06.680 ]' 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:26:06.680 10:32:00 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:06.938 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:07.197 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:07.197 { 00:26:07.197 "name": "89c50127-b6f7-4771-8f58-f1ec3c778e76", 00:26:07.197 "aliases": [ 00:26:07.197 "lvs/nvme0n1p0" 00:26:07.197 ], 00:26:07.197 "product_name": "Logical Volume", 00:26:07.197 "block_size": 4096, 00:26:07.197 "num_blocks": 26476544, 00:26:07.197 "uuid": "89c50127-b6f7-4771-8f58-f1ec3c778e76", 00:26:07.197 "assigned_rate_limits": { 00:26:07.197 "rw_ios_per_sec": 0, 00:26:07.197 "rw_mbytes_per_sec": 0, 00:26:07.197 "r_mbytes_per_sec": 0, 00:26:07.197 "w_mbytes_per_sec": 0 00:26:07.197 }, 00:26:07.197 "claimed": false, 00:26:07.197 "zoned": false, 00:26:07.197 "supported_io_types": { 00:26:07.197 "read": true, 00:26:07.197 "write": true, 00:26:07.197 "unmap": true, 00:26:07.197 "flush": false, 00:26:07.197 "reset": true, 00:26:07.197 "nvme_admin": false, 00:26:07.197 "nvme_io": false, 00:26:07.197 "nvme_io_md": false, 00:26:07.197 "write_zeroes": true, 00:26:07.197 "zcopy": false, 00:26:07.197 "get_zone_info": false, 00:26:07.197 "zone_management": false, 00:26:07.197 "zone_append": false, 00:26:07.197 "compare": false, 00:26:07.197 "compare_and_write": false, 00:26:07.197 "abort": false, 00:26:07.197 "seek_hole": true, 00:26:07.197 "seek_data": true, 00:26:07.197 "copy": false, 00:26:07.197 "nvme_iov_md": false 00:26:07.197 }, 00:26:07.197 "driver_specific": { 00:26:07.197 "lvol": { 00:26:07.197 "lvol_store_uuid": "570a4244-c515-4b4d-93d2-7cac8ca47d1b", 00:26:07.197 "base_bdev": "nvme0n1", 00:26:07.197 "thin_provision": true, 00:26:07.197 "num_allocated_clusters": 0, 00:26:07.197 "snapshot": false, 00:26:07.197 "clone": false, 00:26:07.197 "esnap_clone": false 00:26:07.197 } 00:26:07.197 } 00:26:07.197 } 00:26:07.197 ]' 00:26:07.197 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:07.197 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:07.197 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:07.197 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:07.197 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:07.198 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:07.198 10:32:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:26:07.198 10:32:01 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:07.767 10:32:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89c50127-b6f7-4771-8f58-f1ec3c778e76 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:08.026 { 00:26:08.026 "name": "89c50127-b6f7-4771-8f58-f1ec3c778e76", 00:26:08.026 "aliases": [ 00:26:08.026 "lvs/nvme0n1p0" 00:26:08.026 ], 00:26:08.026 "product_name": "Logical Volume", 00:26:08.026 "block_size": 4096, 00:26:08.026 "num_blocks": 26476544, 00:26:08.026 "uuid": "89c50127-b6f7-4771-8f58-f1ec3c778e76", 00:26:08.026 "assigned_rate_limits": { 00:26:08.026 "rw_ios_per_sec": 0, 00:26:08.026 "rw_mbytes_per_sec": 0, 00:26:08.026 "r_mbytes_per_sec": 0, 00:26:08.026 "w_mbytes_per_sec": 0 00:26:08.026 }, 00:26:08.026 "claimed": false, 00:26:08.026 "zoned": false, 00:26:08.026 "supported_io_types": { 00:26:08.026 "read": true, 00:26:08.026 "write": true, 00:26:08.026 "unmap": true, 00:26:08.026 "flush": false, 00:26:08.026 "reset": true, 00:26:08.026 "nvme_admin": false, 00:26:08.026 "nvme_io": false, 00:26:08.026 "nvme_io_md": false, 00:26:08.026 "write_zeroes": true, 00:26:08.026 "zcopy": false, 00:26:08.026 "get_zone_info": false, 00:26:08.026 "zone_management": false, 00:26:08.026 "zone_append": false, 00:26:08.026 "compare": false, 00:26:08.026 "compare_and_write": false, 00:26:08.026 "abort": false, 00:26:08.026 "seek_hole": true, 00:26:08.026 "seek_data": true, 00:26:08.026 "copy": false, 00:26:08.026 "nvme_iov_md": false 00:26:08.026 }, 00:26:08.026 "driver_specific": { 00:26:08.026 "lvol": { 00:26:08.026 "lvol_store_uuid": "570a4244-c515-4b4d-93d2-7cac8ca47d1b", 00:26:08.026 "base_bdev": "nvme0n1", 00:26:08.026 "thin_provision": true, 00:26:08.026 "num_allocated_clusters": 0, 00:26:08.026 "snapshot": false, 00:26:08.026 "clone": false, 00:26:08.026 "esnap_clone": false 00:26:08.026 } 00:26:08.026 } 00:26:08.026 } 00:26:08.026 ]' 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:26:08.026 10:32:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 89c50127-b6f7-4771-8f58-f1ec3c778e76 -c nvc0n1p0 --l2p_dram_limit 20 00:26:08.286 [2024-11-25 10:32:02.461437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.286 [2024-11-25 10:32:02.461515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:08.286 [2024-11-25 10:32:02.461539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:08.286 [2024-11-25 10:32:02.461555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.461636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.461665] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:08.287 [2024-11-25 10:32:02.461678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:08.287 [2024-11-25 10:32:02.461693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.461720] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:08.287 [2024-11-25 10:32:02.462812] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:08.287 [2024-11-25 10:32:02.462848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.462866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:08.287 [2024-11-25 10:32:02.462880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:26:08.287 [2024-11-25 10:32:02.462894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.463046] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ade792e2-fe9d-4d3c-a4c3-d2207dba283d 00:26:08.287 [2024-11-25 10:32:02.464929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.464975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:08.287 [2024-11-25 10:32:02.464996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:08.287 [2024-11-25 10:32:02.465012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.475197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.475249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:08.287 [2024-11-25 10:32:02.475288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.129 ms 00:26:08.287 [2024-11-25 10:32:02.475317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.475466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.475486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:08.287 [2024-11-25 10:32:02.475508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:08.287 [2024-11-25 10:32:02.475520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.475611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.475631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:08.287 [2024-11-25 10:32:02.475647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:08.287 [2024-11-25 10:32:02.475659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.475695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:08.287 [2024-11-25 10:32:02.481105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.481150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:08.287 [2024-11-25 10:32:02.481185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.424 ms 00:26:08.287 [2024-11-25 10:32:02.481202] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.481264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.481284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:08.287 [2024-11-25 10:32:02.481297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:08.287 [2024-11-25 10:32:02.481311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.481356] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:08.287 [2024-11-25 10:32:02.481523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:08.287 [2024-11-25 10:32:02.481542] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:08.287 [2024-11-25 10:32:02.481561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:08.287 [2024-11-25 10:32:02.481577] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:08.287 [2024-11-25 10:32:02.481593] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:08.287 [2024-11-25 10:32:02.481606] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:08.287 [2024-11-25 10:32:02.481620] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:08.287 [2024-11-25 10:32:02.481631] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:08.287 [2024-11-25 10:32:02.481645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:08.287 [2024-11-25 10:32:02.481658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.481676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:08.287 [2024-11-25 10:32:02.481689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:26:08.287 [2024-11-25 10:32:02.481713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.481833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.287 [2024-11-25 10:32:02.481860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:08.287 [2024-11-25 10:32:02.481874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:08.287 [2024-11-25 10:32:02.481890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.287 [2024-11-25 10:32:02.482003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:08.287 [2024-11-25 10:32:02.482035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:08.287 [2024-11-25 10:32:02.482054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:08.287 [2024-11-25 10:32:02.482069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:08.287 [2024-11-25 10:32:02.482094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:08.287 
[2024-11-25 10:32:02.482120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:08.287 [2024-11-25 10:32:02.482131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:08.287 [2024-11-25 10:32:02.482154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:08.287 [2024-11-25 10:32:02.482174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:08.287 [2024-11-25 10:32:02.482195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:08.287 [2024-11-25 10:32:02.482227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:08.287 [2024-11-25 10:32:02.482239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:08.287 [2024-11-25 10:32:02.482255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:08.287 [2024-11-25 10:32:02.482297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:08.287 [2024-11-25 10:32:02.482316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:08.287 [2024-11-25 10:32:02.482370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.287 [2024-11-25 10:32:02.482395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:08.287 [2024-11-25 10:32:02.482409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.287 [2024-11-25 10:32:02.482433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:08.287 [2024-11-25 10:32:02.482444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:08.287 [2024-11-25 10:32:02.482462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.287 [2024-11-25 10:32:02.482483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:08.287 [2024-11-25 10:32:02.482511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:08.288 [2024-11-25 10:32:02.482531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.288 [2024-11-25 10:32:02.482548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:08.288 [2024-11-25 10:32:02.482560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:08.288 [2024-11-25 10:32:02.482574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:08.288 [2024-11-25 10:32:02.482585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:08.288 [2024-11-25 10:32:02.482599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:08.288 [2024-11-25 10:32:02.482609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:08.288 [2024-11-25 10:32:02.482623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:08.288 [2024-11-25 10:32:02.482633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:26:08.288 [2024-11-25 10:32:02.482646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.288 [2024-11-25 10:32:02.482659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:08.288 [2024-11-25 10:32:02.482682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:08.288 [2024-11-25 10:32:02.482702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.288 [2024-11-25 10:32:02.482717] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:08.288 [2024-11-25 10:32:02.482730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:08.288 [2024-11-25 10:32:02.482754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:08.288 [2024-11-25 10:32:02.482788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.288 [2024-11-25 10:32:02.482812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:08.288 [2024-11-25 10:32:02.482825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:08.288 [2024-11-25 10:32:02.482839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:08.288 [2024-11-25 10:32:02.482851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:08.288 [2024-11-25 10:32:02.482874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:08.288 [2024-11-25 10:32:02.482885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:08.288 [2024-11-25 10:32:02.482904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:08.288 [2024-11-25 10:32:02.482919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.482939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:08.288 [2024-11-25 10:32:02.482962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:08.288 [2024-11-25 10:32:02.482989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:08.288 [2024-11-25 10:32:02.483013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:08.288 [2024-11-25 10:32:02.483030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:08.288 [2024-11-25 10:32:02.483042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:08.288 [2024-11-25 10:32:02.483056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:08.288 [2024-11-25 10:32:02.483069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:08.288 [2024-11-25 10:32:02.483085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:08.288 [2024-11-25 10:32:02.483097] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.483113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.483133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.483160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.483181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:08.288 [2024-11-25 10:32:02.483197] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:08.288 [2024-11-25 10:32:02.483210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.483228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:08.288 [2024-11-25 10:32:02.483240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:08.288 [2024-11-25 10:32:02.483254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:08.288 [2024-11-25 10:32:02.483266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:08.288 [2024-11-25 10:32:02.483287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.288 [2024-11-25 10:32:02.483314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:08.288 [2024-11-25 10:32:02.483343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:26:08.288 [2024-11-25 10:32:02.483359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.288 [2024-11-25 10:32:02.483416] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
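
Everything from the "Check configuration" step above through the NV-cache scrub that follows is the FTL startup sequence driven by the single bdev_ftl_create call traced earlier. Stripped to its essentials, the call pattern is as below (the base bdev name is a placeholder; in this run it is the thin-provisioned 103424 MiB lvol):

    #!/usr/bin/env bash
    # Create an FTL bdev over a base data bdev plus an NV-cache partition.
    # -b: new FTL bdev name; -d: base (data) bdev; -c: NV cache bdev;
    # --l2p_dram_limit: MiB of DRAM the L2P table may occupy (20 here,
    # matching the l2p resident-size notice reported later in startup).
    # First-time startup scrubs the NV cache, hence the generous -t 240.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d my_base_lvol -c nvc0n1p0 \
        --l2p_dram_limit 20
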
00:26:08.288 [2024-11-25 10:32:02.483613] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:10.819 [2024-11-25 10:32:04.901413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.901494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:10.819 [2024-11-25 10:32:04.901526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2417.998 ms 00:26:10.819 [2024-11-25 10:32:04.901540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:04.940658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.940733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.819 [2024-11-25 10:32:04.940761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.804 ms 00:26:10.819 [2024-11-25 10:32:04.940794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:04.940995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.941016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.819 [2024-11-25 10:32:04.941036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:10.819 [2024-11-25 10:32:04.941048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:04.995256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.995329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.819 [2024-11-25 10:32:04.995359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.112 ms 00:26:10.819 [2024-11-25 10:32:04.995373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:04.995443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.995465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.819 [2024-11-25 10:32:04.995485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.819 [2024-11-25 10:32:04.995497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:04.996204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.996241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.819 [2024-11-25 10:32:04.996262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:26:10.819 [2024-11-25 10:32:04.996275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:04.996444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:04.996463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.819 [2024-11-25 10:32:04.996481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:10.819 [2024-11-25 10:32:04.996493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:05.016183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:05.016240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.819 [2024-11-25 
10:32:05.016265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.663 ms 00:26:10.819 [2024-11-25 10:32:05.016278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:05.030931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:26:10.819 [2024-11-25 10:32:05.038805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:05.038855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.819 [2024-11-25 10:32:05.038887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.401 ms 00:26:10.819 [2024-11-25 10:32:05.038903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.819 [2024-11-25 10:32:05.104721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.819 [2024-11-25 10:32:05.104827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:10.820 [2024-11-25 10:32:05.104853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.757 ms 00:26:10.820 [2024-11-25 10:32:05.104869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.820 [2024-11-25 10:32:05.105151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.820 [2024-11-25 10:32:05.105190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.820 [2024-11-25 10:32:05.105206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:26:10.820 [2024-11-25 10:32:05.105222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.820 [2024-11-25 10:32:05.137153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.820 [2024-11-25 10:32:05.137235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:10.820 [2024-11-25 10:32:05.137258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.815 ms 00:26:10.820 [2024-11-25 10:32:05.137275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.168099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.168167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:11.078 [2024-11-25 10:32:05.168191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.759 ms 00:26:11.078 [2024-11-25 10:32:05.168206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.169129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.169173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:11.078 [2024-11-25 10:32:05.169191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:26:11.078 [2024-11-25 10:32:05.169207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.253961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.254245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:11.078 [2024-11-25 10:32:05.254282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.666 ms 00:26:11.078 [2024-11-25 10:32:05.254301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 
10:32:05.289088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.289403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:11.078 [2024-11-25 10:32:05.289439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.601 ms 00:26:11.078 [2024-11-25 10:32:05.289461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.323317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.323419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:11.078 [2024-11-25 10:32:05.323443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.776 ms 00:26:11.078 [2024-11-25 10:32:05.323458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.354473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.354544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:11.078 [2024-11-25 10:32:05.354567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.949 ms 00:26:11.078 [2024-11-25 10:32:05.354582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.354645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.354673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:11.078 [2024-11-25 10:32:05.354688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:11.078 [2024-11-25 10:32:05.354703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.354874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.078 [2024-11-25 10:32:05.354906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:11.078 [2024-11-25 10:32:05.354920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:11.078 [2024-11-25 10:32:05.354934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.078 [2024-11-25 10:32:05.356224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2894.244 ms, result 0 00:26:11.078 { 00:26:11.078 "name": "ftl0", 00:26:11.078 "uuid": "ade792e2-fe9d-4d3c-a4c3-d2207dba283d" 00:26:11.078 } 00:26:11.078 10:32:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:26:11.078 10:32:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:26:11.078 10:32:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:26:11.645 10:32:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:26:11.645 [2024-11-25 10:32:05.812670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:11.645 I/O size of 69632 is greater than zero copy threshold (65536). 00:26:11.645 Zero copy mechanism will not be used. 00:26:11.645 Running I/O for 4 seconds... 
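bdevperf flags the 69632-byte I/O size (68 KiB) because it exceeds its 65536-byte zero-copy threshold, so this run goes through bounce buffers rather than zero copy. The IOPS and MiB/s columns it reports are mutually consistent and easy to cross-check by hand, since throughput is just IOPS times I/O size; a sketch with bc, using the numbers from the table that follows:

# 1780.38 IOPS * 69632 B per I/O, converted to MiB/s; prints ~118.23,
# matching the MiB/s column reported for ftl0 below.
echo '1780.38 * 69632 / 1048576' | bc -l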
00:26:13.515 1630.00 IOPS, 108.24 MiB/s [2024-11-25T10:32:09.224Z] 1783.50 IOPS, 118.44 MiB/s [2024-11-25T10:32:10.159Z] 1769.00 IOPS, 117.47 MiB/s [2024-11-25T10:32:10.159Z] 1781.25 IOPS, 118.29 MiB/s 00:26:15.826 Latency(us) 00:26:15.826 [2024-11-25T10:32:10.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.826 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:26:15.826 ftl0 : 4.00 1780.38 118.23 0.00 0.00 586.58 238.31 2710.81 00:26:15.826 [2024-11-25T10:32:10.159Z] =================================================================================================================== 00:26:15.826 [2024-11-25T10:32:10.159Z] Total : 1780.38 118.23 0.00 0.00 586.58 238.31 2710.81 00:26:15.826 [2024-11-25 10:32:09.825967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:15.826 { 00:26:15.826 "results": [ 00:26:15.826 { 00:26:15.826 "job": "ftl0", 00:26:15.826 "core_mask": "0x1", 00:26:15.826 "workload": "randwrite", 00:26:15.826 "status": "finished", 00:26:15.826 "queue_depth": 1, 00:26:15.826 "io_size": 69632, 00:26:15.826 "runtime": 4.002507, 00:26:15.826 "iops": 1780.384144237599, 00:26:15.826 "mibps": 118.22863457827806, 00:26:15.826 "io_failed": 0, 00:26:15.826 "io_timeout": 0, 00:26:15.826 "avg_latency_us": 586.5794565356058, 00:26:15.826 "min_latency_us": 238.31272727272727, 00:26:15.826 "max_latency_us": 2710.807272727273 00:26:15.826 } 00:26:15.826 ], 00:26:15.826 "core_count": 1 00:26:15.826 } 00:26:15.826 10:32:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:26:15.826 [2024-11-25 10:32:09.970434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:15.826 Running I/O for 4 seconds... 
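Each perform_tests run ends with a JSON block like the one above, carrying the same numbers as the human-readable table (iops, mibps, io_failed, and average/min/max latency in microseconds). The surrounding script already drives jq, so the same tool can summarize a captured block; a sketch only, assuming the JSON had been saved to results.json, which the test itself does not do:

# Illustrative summary of a captured results block:
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json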
00:26:17.696 7729.00 IOPS, 30.19 MiB/s [2024-11-25T10:32:13.403Z] 7124.00 IOPS, 27.83 MiB/s [2024-11-25T10:32:14.338Z] 7090.33 IOPS, 27.70 MiB/s [2024-11-25T10:32:14.338Z] 7174.00 IOPS, 28.02 MiB/s 00:26:20.005 Latency(us) 00:26:20.005 [2024-11-25T10:32:14.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.005 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:26:20.005 ftl0 : 4.02 7162.66 27.98 0.00 0.00 17816.39 325.82 40751.48 00:26:20.005 [2024-11-25T10:32:14.338Z] =================================================================================================================== 00:26:20.005 [2024-11-25T10:32:14.338Z] Total : 7162.66 27.98 0.00 0.00 17816.39 0.00 40751.48 00:26:20.005 [2024-11-25 10:32:14.007006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:20.005 { 00:26:20.005 "results": [ 00:26:20.005 { 00:26:20.005 "job": "ftl0", 00:26:20.005 "core_mask": "0x1", 00:26:20.005 "workload": "randwrite", 00:26:20.005 "status": "finished", 00:26:20.005 "queue_depth": 128, 00:26:20.005 "io_size": 4096, 00:26:20.005 "runtime": 4.024201, 00:26:20.005 "iops": 7162.664091579919, 00:26:20.005 "mibps": 27.97915660773406, 00:26:20.005 "io_failed": 0, 00:26:20.005 "io_timeout": 0, 00:26:20.005 "avg_latency_us": 17816.389372492617, 00:26:20.005 "min_latency_us": 325.8181818181818, 00:26:20.005 "max_latency_us": 40751.476363636364 00:26:20.005 } 00:26:20.005 ], 00:26:20.005 "core_count": 1 00:26:20.005 } 00:26:20.005 10:32:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:26:20.005 [2024-11-25 10:32:14.158817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:20.005 Running I/O for 4 seconds... 
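The verify run below reports its LBA range twice: in hex in the table ("start 0x0 length 0x1400000") and in decimal in the JSON ("length": 20971520). The two forms agree, which a one-line conversion confirms:

# 0x1400000 = 20971520, the same length the JSON result block reports:
printf '%d\n' 0x1400000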
00:26:21.874 6000.00 IOPS, 23.44 MiB/s [2024-11-25T10:32:17.586Z] 5660.50 IOPS, 22.11 MiB/s [2024-11-25T10:32:18.521Z] 5743.00 IOPS, 22.43 MiB/s [2024-11-25T10:32:18.521Z] 5790.50 IOPS, 22.62 MiB/s 00:26:24.188 Latency(us) 00:26:24.188 [2024-11-25T10:32:18.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.188 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:24.188 Verification LBA range: start 0x0 length 0x1400000 00:26:24.188 ftl0 : 4.01 5802.07 22.66 0.00 0.00 21981.92 374.23 32172.22 00:26:24.188 [2024-11-25T10:32:18.521Z] =================================================================================================================== 00:26:24.188 [2024-11-25T10:32:18.521Z] Total : 5802.07 22.66 0.00 0.00 21981.92 0.00 32172.22 00:26:24.188 [2024-11-25 10:32:18.193794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:24.188 { 00:26:24.188 "results": [ 00:26:24.188 { 00:26:24.188 "job": "ftl0", 00:26:24.188 "core_mask": "0x1", 00:26:24.188 "workload": "verify", 00:26:24.188 "status": "finished", 00:26:24.188 "verify_range": { 00:26:24.188 "start": 0, 00:26:24.188 "length": 20971520 00:26:24.188 }, 00:26:24.188 "queue_depth": 128, 00:26:24.188 "io_size": 4096, 00:26:24.188 "runtime": 4.013914, 00:26:24.188 "iops": 5802.06750817282, 00:26:24.188 "mibps": 22.66432620380008, 00:26:24.188 "io_failed": 0, 00:26:24.188 "io_timeout": 0, 00:26:24.188 "avg_latency_us": 21981.917671003477, 00:26:24.188 "min_latency_us": 374.22545454545457, 00:26:24.188 "max_latency_us": 32172.21818181818 00:26:24.188 } 00:26:24.188 ], 00:26:24.188 "core_count": 1 00:26:24.188 } 00:26:24.188 10:32:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:26:24.188 [2024-11-25 10:32:18.479302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.188 [2024-11-25 10:32:18.479375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:24.188 [2024-11-25 10:32:18.479401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:24.188 [2024-11-25 10:32:18.479417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.188 [2024-11-25 10:32:18.479450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:24.188 [2024-11-25 10:32:18.483123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.188 [2024-11-25 10:32:18.483158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:24.188 [2024-11-25 10:32:18.483178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:26:24.188 [2024-11-25 10:32:18.483190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.188 [2024-11-25 10:32:18.484877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.188 [2024-11-25 10:32:18.484922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:24.188 [2024-11-25 10:32:18.484943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:26:24.188 [2024-11-25 10:32:18.484956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.447 [2024-11-25 10:32:18.664167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.447 [2024-11-25 10:32:18.664253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:26:24.447 [2024-11-25 10:32:18.664285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 179.162 ms 00:26:24.447 [2024-11-25 10:32:18.664300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.447 [2024-11-25 10:32:18.670904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.447 [2024-11-25 10:32:18.670945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:24.447 [2024-11-25 10:32:18.670965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.549 ms 00:26:24.447 [2024-11-25 10:32:18.670977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.447 [2024-11-25 10:32:18.702376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.447 [2024-11-25 10:32:18.702420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:24.447 [2024-11-25 10:32:18.702442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.307 ms 00:26:24.447 [2024-11-25 10:32:18.702455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.447 [2024-11-25 10:32:18.721100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.447 [2024-11-25 10:32:18.721147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:24.447 [2024-11-25 10:32:18.721174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.590 ms 00:26:24.447 [2024-11-25 10:32:18.721186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.447 [2024-11-25 10:32:18.721365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.447 [2024-11-25 10:32:18.721388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:24.447 [2024-11-25 10:32:18.721408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:26:24.447 [2024-11-25 10:32:18.721422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.447 [2024-11-25 10:32:18.752127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.447 [2024-11-25 10:32:18.752169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:24.447 [2024-11-25 10:32:18.752189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.677 ms 00:26:24.447 [2024-11-25 10:32:18.752202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.706 [2024-11-25 10:32:18.782552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.706 [2024-11-25 10:32:18.782595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:24.706 [2024-11-25 10:32:18.782616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.296 ms 00:26:24.706 [2024-11-25 10:32:18.782628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.706 [2024-11-25 10:32:18.813055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.706 [2024-11-25 10:32:18.813099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:24.706 [2024-11-25 10:32:18.813120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.376 ms 00:26:24.707 [2024-11-25 10:32:18.813132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.707 [2024-11-25 10:32:18.843345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.707 [2024-11-25 10:32:18.843416] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:24.707 [2024-11-25 10:32:18.843443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.098 ms 00:26:24.707 [2024-11-25 10:32:18.843456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.707 [2024-11-25 10:32:18.843510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:24.707 [2024-11-25 10:32:18.843536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:26:24.707 [2024-11-25 10:32:18.843861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.843991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:24.707 [2024-11-25 10:32:18.844709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844916] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:24.708 [2024-11-25 10:32:18.844979] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:24.708 [2024-11-25 10:32:18.844993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ade792e2-fe9d-4d3c-a4c3-d2207dba283d 00:26:24.708 [2024-11-25 10:32:18.845006] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:24.708 [2024-11-25 10:32:18.845019] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:24.708 [2024-11-25 10:32:18.845034] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:24.708 [2024-11-25 10:32:18.845048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:24.708 [2024-11-25 10:32:18.845059] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:24.708 [2024-11-25 10:32:18.845073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:24.708 [2024-11-25 10:32:18.845084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:24.708 [2024-11-25 10:32:18.845099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:24.708 [2024-11-25 10:32:18.845110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:24.708 [2024-11-25 10:32:18.845124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.708 [2024-11-25 10:32:18.845136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:24.708 [2024-11-25 10:32:18.845151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:26:24.708 [2024-11-25 10:32:18.845163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:18.862257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.708 [2024-11-25 10:32:18.862301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:24.708 [2024-11-25 10:32:18.862322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.029 ms 00:26:24.708 [2024-11-25 10:32:18.862334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:18.862852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.708 [2024-11-25 10:32:18.862892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:24.708 [2024-11-25 10:32:18.862911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:26:24.708 [2024-11-25 10:32:18.862923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:18.910805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.708 [2024-11-25 10:32:18.910859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:24.708 [2024-11-25 10:32:18.910883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.708 [2024-11-25 10:32:18.910896] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:18.910972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.708 [2024-11-25 10:32:18.910988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:24.708 [2024-11-25 10:32:18.911003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.708 [2024-11-25 10:32:18.911015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:18.911152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.708 [2024-11-25 10:32:18.911177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:24.708 [2024-11-25 10:32:18.911193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.708 [2024-11-25 10:32:18.911205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:18.911233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.708 [2024-11-25 10:32:18.911248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:24.708 [2024-11-25 10:32:18.911263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.708 [2024-11-25 10:32:18.911274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.708 [2024-11-25 10:32:19.021904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.708 [2024-11-25 10:32:19.021981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:24.708 [2024-11-25 10:32:19.022008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.708 [2024-11-25 10:32:19.022026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.112411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.112495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:24.967 [2024-11-25 10:32:19.112520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.967 [2024-11-25 10:32:19.112534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.112726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.112750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:24.967 [2024-11-25 10:32:19.112796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.967 [2024-11-25 10:32:19.112813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.112903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.112922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:24.967 [2024-11-25 10:32:19.112939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.967 [2024-11-25 10:32:19.112951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.113095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.113127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:24.967 [2024-11-25 10:32:19.113152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:26:24.967 [2024-11-25 10:32:19.113164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.113222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.113240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:24.967 [2024-11-25 10:32:19.113256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.967 [2024-11-25 10:32:19.113268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.113321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.113339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:24.967 [2024-11-25 10:32:19.113354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.967 [2024-11-25 10:32:19.113369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.113431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.967 [2024-11-25 10:32:19.113466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:24.967 [2024-11-25 10:32:19.113482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.967 [2024-11-25 10:32:19.113495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.967 [2024-11-25 10:32:19.113662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 634.308 ms, result 0 00:26:24.967 true 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78009 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78009 ']' 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78009 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78009 00:26:24.967 killing process with pid 78009 00:26:24.967 Received shutdown signal, test time was about 4.000000 seconds 00:26:24.967 00:26:24.967 Latency(us) 00:26:24.967 [2024-11-25T10:32:19.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.967 [2024-11-25T10:32:19.300Z] =================================================================================================================== 00:26:24.967 [2024-11-25T10:32:19.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78009' 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78009 00:26:24.967 10:32:19 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78009 00:26:29.148 10:32:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:29.148 10:32:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:26:29.148 Remove shared memory files 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:29.149 10:32:22 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:26:29.149 00:26:29.149 real 0m25.516s 00:26:29.149 user 0m29.349s 00:26:29.149 sys 0m1.328s 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.149 10:32:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.149 ************************************ 00:26:29.149 END TEST ftl_bdevperf 00:26:29.149 ************************************ 00:26:29.149 10:32:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:26:29.149 10:32:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:29.149 10:32:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.149 10:32:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:29.149 ************************************ 00:26:29.149 START TEST ftl_trim 00:26:29.149 ************************************ 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:26:29.149 * Looking for test storage... 00:26:29.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.149 10:32:22 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:29.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.149 --rc genhtml_branch_coverage=1 00:26:29.149 --rc genhtml_function_coverage=1 00:26:29.149 --rc genhtml_legend=1 00:26:29.149 --rc geninfo_all_blocks=1 00:26:29.149 --rc geninfo_unexecuted_blocks=1 00:26:29.149 00:26:29.149 ' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:29.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.149 --rc genhtml_branch_coverage=1 00:26:29.149 --rc genhtml_function_coverage=1 00:26:29.149 --rc genhtml_legend=1 00:26:29.149 --rc geninfo_all_blocks=1 00:26:29.149 --rc geninfo_unexecuted_blocks=1 00:26:29.149 00:26:29.149 ' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:29.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.149 --rc genhtml_branch_coverage=1 00:26:29.149 --rc genhtml_function_coverage=1 00:26:29.149 --rc genhtml_legend=1 00:26:29.149 --rc geninfo_all_blocks=1 00:26:29.149 --rc geninfo_unexecuted_blocks=1 00:26:29.149 00:26:29.149 ' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:29.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.149 --rc genhtml_branch_coverage=1 00:26:29.149 --rc genhtml_function_coverage=1 00:26:29.149 --rc genhtml_legend=1 00:26:29.149 --rc geninfo_all_blocks=1 00:26:29.149 --rc geninfo_unexecuted_blocks=1 00:26:29.149 00:26:29.149 ' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:29.149 10:32:22 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78367 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78367 00:26:29.149 10:32:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78367 ']' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.149 10:32:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:29.149 [2024-11-25 10:32:23.078076] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:26:29.149 [2024-11-25 10:32:23.078245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78367 ] 00:26:29.149 [2024-11-25 10:32:23.321819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:29.408 [2024-11-25 10:32:23.484097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.408 [2024-11-25 10:32:23.484195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.408 [2024-11-25 10:32:23.484196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.343 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.343 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:26:30.343 10:32:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:30.343 10:32:24 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:26:30.343 10:32:24 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:30.343 10:32:24 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:26:30.343 10:32:24 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:26:30.343 10:32:24 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:30.602 10:32:24 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:30.602 10:32:24 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:26:30.602 10:32:24 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:30.602 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:30.602 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:30.602 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:30.602 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:30.602 10:32:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:30.861 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:30.861 { 00:26:30.861 "name": "nvme0n1", 00:26:30.861 "aliases": [ 
00:26:30.861 "552a3705-cc47-4ff8-ad9d-fcd3a776026f" 00:26:30.861 ], 00:26:30.861 "product_name": "NVMe disk", 00:26:30.861 "block_size": 4096, 00:26:30.861 "num_blocks": 1310720, 00:26:30.861 "uuid": "552a3705-cc47-4ff8-ad9d-fcd3a776026f", 00:26:30.861 "numa_id": -1, 00:26:30.861 "assigned_rate_limits": { 00:26:30.861 "rw_ios_per_sec": 0, 00:26:30.861 "rw_mbytes_per_sec": 0, 00:26:30.861 "r_mbytes_per_sec": 0, 00:26:30.861 "w_mbytes_per_sec": 0 00:26:30.861 }, 00:26:30.861 "claimed": true, 00:26:30.861 "claim_type": "read_many_write_one", 00:26:30.861 "zoned": false, 00:26:30.861 "supported_io_types": { 00:26:30.861 "read": true, 00:26:30.861 "write": true, 00:26:30.861 "unmap": true, 00:26:30.861 "flush": true, 00:26:30.861 "reset": true, 00:26:30.861 "nvme_admin": true, 00:26:30.861 "nvme_io": true, 00:26:30.861 "nvme_io_md": false, 00:26:30.861 "write_zeroes": true, 00:26:30.861 "zcopy": false, 00:26:30.861 "get_zone_info": false, 00:26:30.861 "zone_management": false, 00:26:30.861 "zone_append": false, 00:26:30.861 "compare": true, 00:26:30.861 "compare_and_write": false, 00:26:30.861 "abort": true, 00:26:30.861 "seek_hole": false, 00:26:30.861 "seek_data": false, 00:26:30.861 "copy": true, 00:26:30.861 "nvme_iov_md": false 00:26:30.861 }, 00:26:30.861 "driver_specific": { 00:26:30.861 "nvme": [ 00:26:30.861 { 00:26:30.861 "pci_address": "0000:00:11.0", 00:26:30.861 "trid": { 00:26:30.861 "trtype": "PCIe", 00:26:30.861 "traddr": "0000:00:11.0" 00:26:30.861 }, 00:26:30.861 "ctrlr_data": { 00:26:30.861 "cntlid": 0, 00:26:30.861 "vendor_id": "0x1b36", 00:26:30.861 "model_number": "QEMU NVMe Ctrl", 00:26:30.861 "serial_number": "12341", 00:26:30.861 "firmware_revision": "8.0.0", 00:26:30.861 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:30.861 "oacs": { 00:26:30.861 "security": 0, 00:26:30.861 "format": 1, 00:26:30.861 "firmware": 0, 00:26:30.861 "ns_manage": 1 00:26:30.861 }, 00:26:30.861 "multi_ctrlr": false, 00:26:30.861 "ana_reporting": false 00:26:30.861 }, 00:26:30.861 "vs": { 00:26:30.861 "nvme_version": "1.4" 00:26:30.861 }, 00:26:30.861 "ns_data": { 00:26:30.861 "id": 1, 00:26:30.861 "can_share": false 00:26:30.861 } 00:26:30.861 } 00:26:30.861 ], 00:26:30.861 "mp_policy": "active_passive" 00:26:30.861 } 00:26:30.861 } 00:26:30.861 ]' 00:26:30.861 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:30.861 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:26:30.861 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:30.861 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:30.861 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:30.862 10:32:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:26:30.862 10:32:25 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:26:30.862 10:32:25 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:30.862 10:32:25 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:26:30.862 10:32:25 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:30.862 10:32:25 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:31.428 10:32:25 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=570a4244-c515-4b4d-93d2-7cac8ca47d1b 00:26:31.428 10:32:25 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:26:31.428 10:32:25 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 570a4244-c515-4b4d-93d2-7cac8ca47d1b 00:26:31.428 10:32:25 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:31.995 10:32:26 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3bdcbaea-9339-403f-9758-8d4613c774a8 00:26:31.995 10:32:26 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3bdcbaea-9339-403f-9758-8d4613c774a8 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:26:32.254 10:32:26 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:32.254 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:32.254 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:32.254 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:32.254 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:32.254 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:32.513 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:32.513 { 00:26:32.513 "name": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:32.513 "aliases": [ 00:26:32.513 "lvs/nvme0n1p0" 00:26:32.513 ], 00:26:32.513 "product_name": "Logical Volume", 00:26:32.513 "block_size": 4096, 00:26:32.513 "num_blocks": 26476544, 00:26:32.513 "uuid": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:32.513 "assigned_rate_limits": { 00:26:32.513 "rw_ios_per_sec": 0, 00:26:32.513 "rw_mbytes_per_sec": 0, 00:26:32.513 "r_mbytes_per_sec": 0, 00:26:32.513 "w_mbytes_per_sec": 0 00:26:32.513 }, 00:26:32.513 "claimed": false, 00:26:32.513 "zoned": false, 00:26:32.513 "supported_io_types": { 00:26:32.513 "read": true, 00:26:32.513 "write": true, 00:26:32.513 "unmap": true, 00:26:32.513 "flush": false, 00:26:32.513 "reset": true, 00:26:32.513 "nvme_admin": false, 00:26:32.513 "nvme_io": false, 00:26:32.513 "nvme_io_md": false, 00:26:32.513 "write_zeroes": true, 00:26:32.513 "zcopy": false, 00:26:32.513 "get_zone_info": false, 00:26:32.513 "zone_management": false, 00:26:32.513 "zone_append": false, 00:26:32.513 "compare": false, 00:26:32.513 "compare_and_write": false, 00:26:32.513 "abort": false, 00:26:32.513 "seek_hole": true, 00:26:32.513 "seek_data": true, 00:26:32.513 "copy": false, 00:26:32.513 "nvme_iov_md": false 00:26:32.513 }, 00:26:32.513 "driver_specific": { 00:26:32.513 "lvol": { 00:26:32.513 "lvol_store_uuid": "3bdcbaea-9339-403f-9758-8d4613c774a8", 00:26:32.513 "base_bdev": "nvme0n1", 00:26:32.513 "thin_provision": true, 00:26:32.513 "num_allocated_clusters": 0, 00:26:32.513 "snapshot": false, 00:26:32.513 "clone": false, 00:26:32.513 "esnap_clone": false 00:26:32.513 } 00:26:32.513 } 00:26:32.513 } 00:26:32.513 ]' 00:26:32.513 10:32:26 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:32.513 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:26:32.513 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:32.513 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:32.513 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:32.513 10:32:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:26:32.513 10:32:26 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:26:32.513 10:32:26 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:26:32.513 10:32:26 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:33.081 10:32:27 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:33.081 10:32:27 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:33.081 10:32:27 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:33.081 { 00:26:33.081 "name": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:33.081 "aliases": [ 00:26:33.081 "lvs/nvme0n1p0" 00:26:33.081 ], 00:26:33.081 "product_name": "Logical Volume", 00:26:33.081 "block_size": 4096, 00:26:33.081 "num_blocks": 26476544, 00:26:33.081 "uuid": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:33.081 "assigned_rate_limits": { 00:26:33.081 "rw_ios_per_sec": 0, 00:26:33.081 "rw_mbytes_per_sec": 0, 00:26:33.081 "r_mbytes_per_sec": 0, 00:26:33.081 "w_mbytes_per_sec": 0 00:26:33.081 }, 00:26:33.081 "claimed": false, 00:26:33.081 "zoned": false, 00:26:33.081 "supported_io_types": { 00:26:33.081 "read": true, 00:26:33.081 "write": true, 00:26:33.081 "unmap": true, 00:26:33.081 "flush": false, 00:26:33.081 "reset": true, 00:26:33.081 "nvme_admin": false, 00:26:33.081 "nvme_io": false, 00:26:33.081 "nvme_io_md": false, 00:26:33.081 "write_zeroes": true, 00:26:33.081 "zcopy": false, 00:26:33.081 "get_zone_info": false, 00:26:33.081 "zone_management": false, 00:26:33.081 "zone_append": false, 00:26:33.081 "compare": false, 00:26:33.081 "compare_and_write": false, 00:26:33.081 "abort": false, 00:26:33.081 "seek_hole": true, 00:26:33.081 "seek_data": true, 00:26:33.081 "copy": false, 00:26:33.081 "nvme_iov_md": false 00:26:33.081 }, 00:26:33.081 "driver_specific": { 00:26:33.081 "lvol": { 00:26:33.081 "lvol_store_uuid": "3bdcbaea-9339-403f-9758-8d4613c774a8", 00:26:33.081 "base_bdev": "nvme0n1", 00:26:33.081 "thin_provision": true, 00:26:33.081 "num_allocated_clusters": 0, 00:26:33.081 "snapshot": false, 00:26:33.081 "clone": false, 00:26:33.081 "esnap_clone": false 00:26:33.081 } 00:26:33.081 } 00:26:33.081 } 00:26:33.081 ]' 00:26:33.081 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:33.340 10:32:27 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:26:33.340 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:33.340 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:33.340 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:33.340 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:26:33.340 10:32:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:26:33.340 10:32:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:33.598 10:32:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:26:33.598 10:32:27 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:26:33.598 10:32:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:33.598 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:33.598 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:33.598 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:26:33.598 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:26:33.598 10:32:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dab81f36-7920-4821-a65f-457c7d5e50b0 00:26:33.856 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:33.856 { 00:26:33.856 "name": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:33.856 "aliases": [ 00:26:33.856 "lvs/nvme0n1p0" 00:26:33.856 ], 00:26:33.856 "product_name": "Logical Volume", 00:26:33.856 "block_size": 4096, 00:26:33.856 "num_blocks": 26476544, 00:26:33.856 "uuid": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:33.856 "assigned_rate_limits": { 00:26:33.856 "rw_ios_per_sec": 0, 00:26:33.856 "rw_mbytes_per_sec": 0, 00:26:33.856 "r_mbytes_per_sec": 0, 00:26:33.856 "w_mbytes_per_sec": 0 00:26:33.856 }, 00:26:33.856 "claimed": false, 00:26:33.856 "zoned": false, 00:26:33.856 "supported_io_types": { 00:26:33.856 "read": true, 00:26:33.856 "write": true, 00:26:33.856 "unmap": true, 00:26:33.856 "flush": false, 00:26:33.856 "reset": true, 00:26:33.856 "nvme_admin": false, 00:26:33.856 "nvme_io": false, 00:26:33.856 "nvme_io_md": false, 00:26:33.856 "write_zeroes": true, 00:26:33.856 "zcopy": false, 00:26:33.856 "get_zone_info": false, 00:26:33.856 "zone_management": false, 00:26:33.856 "zone_append": false, 00:26:33.856 "compare": false, 00:26:33.856 "compare_and_write": false, 00:26:33.856 "abort": false, 00:26:33.856 "seek_hole": true, 00:26:33.856 "seek_data": true, 00:26:33.856 "copy": false, 00:26:33.856 "nvme_iov_md": false 00:26:33.856 }, 00:26:33.856 "driver_specific": { 00:26:33.856 "lvol": { 00:26:33.856 "lvol_store_uuid": "3bdcbaea-9339-403f-9758-8d4613c774a8", 00:26:33.856 "base_bdev": "nvme0n1", 00:26:33.856 "thin_provision": true, 00:26:33.856 "num_allocated_clusters": 0, 00:26:33.856 "snapshot": false, 00:26:33.856 "clone": false, 00:26:33.856 "esnap_clone": false 00:26:33.856 } 00:26:33.856 } 00:26:33.856 } 00:26:33.856 ]' 00:26:33.856 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:33.856 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:26:33.856 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:34.114 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:26:34.114 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:34.114 10:32:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:26:34.114 10:32:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:26:34.114 10:32:28 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dab81f36-7920-4821-a65f-457c7d5e50b0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:26:34.373 [2024-11-25 10:32:28.481561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.373 [2024-11-25 10:32:28.482130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:34.373 [2024-11-25 10:32:28.482175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:34.373 [2024-11-25 10:32:28.482191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.373 [2024-11-25 10:32:28.485933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.373 [2024-11-25 10:32:28.485981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:34.373 [2024-11-25 10:32:28.486003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:26:34.373 [2024-11-25 10:32:28.486015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.373 [2024-11-25 10:32:28.486166] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:34.373 [2024-11-25 10:32:28.487223] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:34.373 [2024-11-25 10:32:28.487406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.373 [2024-11-25 10:32:28.487427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:34.373 [2024-11-25 10:32:28.487443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:26:34.373 [2024-11-25 10:32:28.487456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.373 [2024-11-25 10:32:28.487730] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 320fbe64-4a13-4fdf-8f16-2944badb2627 00:26:34.373 [2024-11-25 10:32:28.489567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.373 [2024-11-25 10:32:28.489614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:34.373 [2024-11-25 10:32:28.489632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:34.373 [2024-11-25 10:32:28.489647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.373 [2024-11-25 10:32:28.499320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.373 [2024-11-25 10:32:28.499382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:34.373 [2024-11-25 10:32:28.499409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.580 ms 00:26:34.374 [2024-11-25 10:32:28.499424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.499626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.374 [2024-11-25 10:32:28.499653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:34.374 [2024-11-25 10:32:28.499669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.098 ms 00:26:34.374 [2024-11-25 10:32:28.499688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.499734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.374 [2024-11-25 10:32:28.499754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:34.374 [2024-11-25 10:32:28.499767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:34.374 [2024-11-25 10:32:28.499807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.499857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:34.374 [2024-11-25 10:32:28.505157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.374 [2024-11-25 10:32:28.505200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:34.374 [2024-11-25 10:32:28.505227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:26:34.374 [2024-11-25 10:32:28.505241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.505324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.374 [2024-11-25 10:32:28.505344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:34.374 [2024-11-25 10:32:28.505360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:34.374 [2024-11-25 10:32:28.505394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.505438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:34.374 [2024-11-25 10:32:28.505604] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:34.374 [2024-11-25 10:32:28.505628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:34.374 [2024-11-25 10:32:28.505644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:34.374 [2024-11-25 10:32:28.505662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:34.374 [2024-11-25 10:32:28.505676] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:34.374 [2024-11-25 10:32:28.505691] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:34.374 [2024-11-25 10:32:28.505702] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:34.374 [2024-11-25 10:32:28.505716] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:34.374 [2024-11-25 10:32:28.505730] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:34.374 [2024-11-25 10:32:28.505744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.374 [2024-11-25 10:32:28.505756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:34.374 [2024-11-25 10:32:28.505785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:26:34.374 [2024-11-25 10:32:28.505801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.505915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.374 
[2024-11-25 10:32:28.505930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:34.374 [2024-11-25 10:32:28.505945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:34.374 [2024-11-25 10:32:28.505956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.374 [2024-11-25 10:32:28.506100] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:34.374 [2024-11-25 10:32:28.506117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:34.374 [2024-11-25 10:32:28.506132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:34.374 [2024-11-25 10:32:28.506170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:34.374 [2024-11-25 10:32:28.506217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:34.374 [2024-11-25 10:32:28.506247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:34.374 [2024-11-25 10:32:28.506261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:34.374 [2024-11-25 10:32:28.506277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:34.374 [2024-11-25 10:32:28.506290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:34.374 [2024-11-25 10:32:28.506307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:34.374 [2024-11-25 10:32:28.506319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:34.374 [2024-11-25 10:32:28.506373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:34.374 [2024-11-25 10:32:28.506450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:34.374 [2024-11-25 10:32:28.506511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:34.374 [2024-11-25 10:32:28.506549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:26:34.374 [2024-11-25 10:32:28.506584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.374 [2024-11-25 10:32:28.506609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:34.374 [2024-11-25 10:32:28.506625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:34.374 [2024-11-25 10:32:28.506650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:34.374 [2024-11-25 10:32:28.506661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:34.374 [2024-11-25 10:32:28.506674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:34.374 [2024-11-25 10:32:28.506685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:34.374 [2024-11-25 10:32:28.506698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:34.374 [2024-11-25 10:32:28.506709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:34.374 [2024-11-25 10:32:28.506733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:34.374 [2024-11-25 10:32:28.506746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.506757] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:34.374 [2024-11-25 10:32:28.507002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:34.374 [2024-11-25 10:32:28.507066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:34.374 [2024-11-25 10:32:28.507308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.374 [2024-11-25 10:32:28.507433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:34.374 [2024-11-25 10:32:28.507493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:34.374 [2024-11-25 10:32:28.507627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:34.374 [2024-11-25 10:32:28.507682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:34.374 [2024-11-25 10:32:28.507842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:34.374 [2024-11-25 10:32:28.507965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:34.374 [2024-11-25 10:32:28.508022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:34.374 [2024-11-25 10:32:28.508182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.374 [2024-11-25 10:32:28.508312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:34.374 [2024-11-25 10:32:28.508442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:34.374 [2024-11-25 10:32:28.508567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:26:34.374 [2024-11-25 10:32:28.508716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:34.374 [2024-11-25 10:32:28.508867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:34.374 [2024-11-25 10:32:28.508944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:34.374 [2024-11-25 10:32:28.509015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:34.374 [2024-11-25 10:32:28.509117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:34.374 [2024-11-25 10:32:28.509144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:34.374 [2024-11-25 10:32:28.509162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:34.374 [2024-11-25 10:32:28.509175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:34.375 [2024-11-25 10:32:28.509189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:34.375 [2024-11-25 10:32:28.509200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:34.375 [2024-11-25 10:32:28.509214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:34.375 [2024-11-25 10:32:28.509226] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:34.375 [2024-11-25 10:32:28.509259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.375 [2024-11-25 10:32:28.509272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:34.375 [2024-11-25 10:32:28.509286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:34.375 [2024-11-25 10:32:28.509298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:34.375 [2024-11-25 10:32:28.509312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:34.375 [2024-11-25 10:32:28.509326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.375 [2024-11-25 10:32:28.509341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:34.375 [2024-11-25 10:32:28.509354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:26:34.375 [2024-11-25 10:32:28.509368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.375 [2024-11-25 10:32:28.509516] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:26:34.375 [2024-11-25 10:32:28.509546] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:36.904 [2024-11-25 10:32:30.896558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.896644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:36.904 [2024-11-25 10:32:30.896669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2387.052 ms 00:26:36.904 [2024-11-25 10:32:30.896686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.904 [2024-11-25 10:32:30.936826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.936912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.904 [2024-11-25 10:32:30.936936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.723 ms 00:26:36.904 [2024-11-25 10:32:30.936952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.904 [2024-11-25 10:32:30.937164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.937198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:36.904 [2024-11-25 10:32:30.937213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:36.904 [2024-11-25 10:32:30.937231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.904 [2024-11-25 10:32:30.995488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.995762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.904 [2024-11-25 10:32:30.995808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.165 ms 00:26:36.904 [2024-11-25 10:32:30.995829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.904 [2024-11-25 10:32:30.995993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.996019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.904 [2024-11-25 10:32:30.996034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:36.904 [2024-11-25 10:32:30.996049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.904 [2024-11-25 10:32:30.996654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.996681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.904 [2024-11-25 10:32:30.996695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:26:36.904 [2024-11-25 10:32:30.996710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.904 [2024-11-25 10:32:30.996921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.904 [2024-11-25 10:32:30.996944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.904 [2024-11-25 10:32:30.996958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:26:36.905 [2024-11-25 10:32:30.996975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.019517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.019588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:26:36.905 [2024-11-25 10:32:31.019610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.436 ms 00:26:36.905 [2024-11-25 10:32:31.019629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.034873] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:36.905 [2024-11-25 10:32:31.058004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.058089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:36.905 [2024-11-25 10:32:31.058115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.123 ms 00:26:36.905 [2024-11-25 10:32:31.058127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.128363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.128447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:36.905 [2024-11-25 10:32:31.128474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.046 ms 00:26:36.905 [2024-11-25 10:32:31.128487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.128843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.128878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:36.905 [2024-11-25 10:32:31.128902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:26:36.905 [2024-11-25 10:32:31.128914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.160466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.160513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:36.905 [2024-11-25 10:32:31.160536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.489 ms 00:26:36.905 [2024-11-25 10:32:31.160549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.194538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.194712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:36.905 [2024-11-25 10:32:31.194749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.867 ms 00:26:36.905 [2024-11-25 10:32:31.194768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.905 [2024-11-25 10:32:31.195709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.905 [2024-11-25 10:32:31.195747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:36.905 [2024-11-25 10:32:31.195767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:26:36.905 [2024-11-25 10:32:31.195799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.163 [2024-11-25 10:32:31.286824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.163 [2024-11-25 10:32:31.286894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:37.163 [2024-11-25 10:32:31.286948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.951 ms 00:26:37.163 [2024-11-25 10:32:31.286961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
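Each FTL management step in the startup trace above is logged by mngt/ftl_mngt.c as a four-entry group: an Action marker, the step name, its duration, and a status code (0 on success). Assuming GNU grep and a saved copy of this log (build.log is a placeholder name here, not a file the test suite produces), a rough one-liner can pair each step name with its duration, which makes slow steps such as the 2387.052 ms Scrub NV cache or the 90.951 ms Wipe P2L region easy to spot:
  grep -o 'name: [^[]*\|duration: [0-9.]* ms' build.log | paste - -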
00:26:37.163 [2024-11-25 10:32:31.322237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.163 [2024-11-25 10:32:31.322309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:37.163 [2024-11-25 10:32:31.322377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.117 ms 00:26:37.163 [2024-11-25 10:32:31.322400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.163 [2024-11-25 10:32:31.354854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.163 [2024-11-25 10:32:31.354946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:37.163 [2024-11-25 10:32:31.354987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.287 ms 00:26:37.163 [2024-11-25 10:32:31.355011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.163 [2024-11-25 10:32:31.389231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.163 [2024-11-25 10:32:31.389535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:37.163 [2024-11-25 10:32:31.389576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.041 ms 00:26:37.163 [2024-11-25 10:32:31.389613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.163 [2024-11-25 10:32:31.389831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.163 [2024-11-25 10:32:31.389859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:37.163 [2024-11-25 10:32:31.389892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:37.163 [2024-11-25 10:32:31.389904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.163 [2024-11-25 10:32:31.390047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.163 [2024-11-25 10:32:31.390064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:37.163 [2024-11-25 10:32:31.390080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:37.163 [2024-11-25 10:32:31.390092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.163 [2024-11-25 10:32:31.391488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:37.163 [2024-11-25 10:32:31.396499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2909.555 ms, result 0 00:26:37.163 [2024-11-25 10:32:31.397665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:37.163 { 00:26:37.163 "name": "ftl0", 00:26:37.163 "uuid": "320fbe64-4a13-4fdf-8f16-2944badb2627" 00:26:37.163 } 00:26:37.163 10:32:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:26:37.163 10:32:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:26:37.163 10:32:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:37.163 10:32:31 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:26:37.163 10:32:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:37.163 10:32:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:37.163 10:32:31 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:37.422 10:32:31 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:26:37.681 [ 00:26:37.681 { 00:26:37.681 "name": "ftl0", 00:26:37.681 "aliases": [ 00:26:37.681 "320fbe64-4a13-4fdf-8f16-2944badb2627" 00:26:37.681 ], 00:26:37.681 "product_name": "FTL disk", 00:26:37.681 "block_size": 4096, 00:26:37.681 "num_blocks": 23592960, 00:26:37.681 "uuid": "320fbe64-4a13-4fdf-8f16-2944badb2627", 00:26:37.681 "assigned_rate_limits": { 00:26:37.681 "rw_ios_per_sec": 0, 00:26:37.681 "rw_mbytes_per_sec": 0, 00:26:37.681 "r_mbytes_per_sec": 0, 00:26:37.681 "w_mbytes_per_sec": 0 00:26:37.681 }, 00:26:37.681 "claimed": false, 00:26:37.681 "zoned": false, 00:26:37.681 "supported_io_types": { 00:26:37.681 "read": true, 00:26:37.681 "write": true, 00:26:37.681 "unmap": true, 00:26:37.681 "flush": true, 00:26:37.681 "reset": false, 00:26:37.681 "nvme_admin": false, 00:26:37.681 "nvme_io": false, 00:26:37.681 "nvme_io_md": false, 00:26:37.681 "write_zeroes": true, 00:26:37.681 "zcopy": false, 00:26:37.681 "get_zone_info": false, 00:26:37.681 "zone_management": false, 00:26:37.681 "zone_append": false, 00:26:37.681 "compare": false, 00:26:37.681 "compare_and_write": false, 00:26:37.681 "abort": false, 00:26:37.681 "seek_hole": false, 00:26:37.681 "seek_data": false, 00:26:37.681 "copy": false, 00:26:37.681 "nvme_iov_md": false 00:26:37.681 }, 00:26:37.681 "driver_specific": { 00:26:37.681 "ftl": { 00:26:37.681 "base_bdev": "dab81f36-7920-4821-a65f-457c7d5e50b0", 00:26:37.681 "cache": "nvc0n1p0" 00:26:37.681 } 00:26:37.681 } 00:26:37.681 } 00:26:37.681 ] 00:26:37.681 10:32:32 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:26:37.681 10:32:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:26:37.681 10:32:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:38.287 10:32:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:26:38.287 10:32:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:26:38.545 10:32:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:26:38.545 { 00:26:38.545 "name": "ftl0", 00:26:38.545 "aliases": [ 00:26:38.545 "320fbe64-4a13-4fdf-8f16-2944badb2627" 00:26:38.545 ], 00:26:38.545 "product_name": "FTL disk", 00:26:38.545 "block_size": 4096, 00:26:38.545 "num_blocks": 23592960, 00:26:38.545 "uuid": "320fbe64-4a13-4fdf-8f16-2944badb2627", 00:26:38.545 "assigned_rate_limits": { 00:26:38.545 "rw_ios_per_sec": 0, 00:26:38.545 "rw_mbytes_per_sec": 0, 00:26:38.545 "r_mbytes_per_sec": 0, 00:26:38.545 "w_mbytes_per_sec": 0 00:26:38.545 }, 00:26:38.545 "claimed": false, 00:26:38.545 "zoned": false, 00:26:38.545 "supported_io_types": { 00:26:38.545 "read": true, 00:26:38.545 "write": true, 00:26:38.545 "unmap": true, 00:26:38.545 "flush": true, 00:26:38.545 "reset": false, 00:26:38.545 "nvme_admin": false, 00:26:38.545 "nvme_io": false, 00:26:38.545 "nvme_io_md": false, 00:26:38.545 "write_zeroes": true, 00:26:38.545 "zcopy": false, 00:26:38.545 "get_zone_info": false, 00:26:38.545 "zone_management": false, 00:26:38.545 "zone_append": false, 00:26:38.545 "compare": false, 00:26:38.545 "compare_and_write": false, 00:26:38.545 "abort": false, 00:26:38.545 "seek_hole": false, 00:26:38.545 "seek_data": false, 00:26:38.545 "copy": false, 00:26:38.545 "nvme_iov_md": false 00:26:38.545 }, 00:26:38.545 "driver_specific": { 00:26:38.545 "ftl": { 00:26:38.545 "base_bdev": "dab81f36-7920-4821-a65f-457c7d5e50b0", 
00:26:38.545 "cache": "nvc0n1p0" 00:26:38.545 } 00:26:38.545 } 00:26:38.546 } 00:26:38.546 ]' 00:26:38.546 10:32:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:26:38.546 10:32:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:26:38.546 10:32:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:38.804 [2024-11-25 10:32:32.952202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:32.952456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:38.804 [2024-11-25 10:32:32.952595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:38.804 [2024-11-25 10:32:32.952630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:32.952717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:38.804 [2024-11-25 10:32:32.956476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:32.956515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:38.804 [2024-11-25 10:32:32.956544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:26:38.804 [2024-11-25 10:32:32.956557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:32.957361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:32.957389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:38.804 [2024-11-25 10:32:32.957407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:26:38.804 [2024-11-25 10:32:32.957418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:32.961049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:32.961090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:38.804 [2024-11-25 10:32:32.961109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:26:38.804 [2024-11-25 10:32:32.961122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:32.968684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:32.968723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:38.804 [2024-11-25 10:32:32.968743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.491 ms 00:26:38.804 [2024-11-25 10:32:32.968755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:33.000921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:33.000981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:38.804 [2024-11-25 10:32:33.001009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.014 ms 00:26:38.804 [2024-11-25 10:32:33.001021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:33.020140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:33.020190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:38.804 [2024-11-25 10:32:33.020214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.996 ms 00:26:38.804 [2024-11-25 10:32:33.020231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:33.020571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:33.020600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:38.804 [2024-11-25 10:32:33.020619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:26:38.804 [2024-11-25 10:32:33.020631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:33.051863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:33.051910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:38.804 [2024-11-25 10:32:33.051933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.182 ms 00:26:38.804 [2024-11-25 10:32:33.051946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:33.083001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:33.083185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:38.804 [2024-11-25 10:32:33.083225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.932 ms 00:26:38.804 [2024-11-25 10:32:33.083238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.804 [2024-11-25 10:32:33.113801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.804 [2024-11-25 10:32:33.113978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:38.804 [2024-11-25 10:32:33.114015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.440 ms 00:26:38.804 [2024-11-25 10:32:33.114029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.064 [2024-11-25 10:32:33.144574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.064 [2024-11-25 10:32:33.144646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:39.064 [2024-11-25 10:32:33.144672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.359 ms 00:26:39.064 [2024-11-25 10:32:33.144684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.064 [2024-11-25 10:32:33.144824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:39.064 [2024-11-25 10:32:33.144862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144972] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.144991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 
[2024-11-25 10:32:33.145354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:39.064 [2024-11-25 10:32:33.145698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:26:39.065 [2024-11-25 10:32:33.145712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.145987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:39.065 [2024-11-25 10:32:33.146385] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:39.065 [2024-11-25 10:32:33.146433] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627 00:26:39.065 [2024-11-25 10:32:33.146456] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:39.065 [2024-11-25 10:32:33.146478] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:39.065 [2024-11-25 10:32:33.146497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:39.065 [2024-11-25 10:32:33.146517] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:39.065 [2024-11-25 10:32:33.146533] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:39.065 [2024-11-25 10:32:33.146547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:26:39.065 [2024-11-25 10:32:33.146558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:39.065 [2024-11-25 10:32:33.146571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:39.065 [2024-11-25 10:32:33.146581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:39.065 [2024-11-25 10:32:33.146597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.065 [2024-11-25 10:32:33.146609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:39.065 [2024-11-25 10:32:33.146624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:26:39.065 [2024-11-25 10:32:33.146636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.164375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.065 [2024-11-25 10:32:33.164541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:39.065 [2024-11-25 10:32:33.164678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.683 ms 00:26:39.065 [2024-11-25 10:32:33.164824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.165498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.065 [2024-11-25 10:32:33.165632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:39.065 [2024-11-25 10:32:33.165747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:26:39.065 [2024-11-25 10:32:33.165816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.226636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.065 [2024-11-25 10:32:33.226877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:39.065 [2024-11-25 10:32:33.227017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.065 [2024-11-25 10:32:33.227069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.227283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.065 [2024-11-25 10:32:33.227339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:39.065 [2024-11-25 10:32:33.227455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.065 [2024-11-25 10:32:33.227504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.227642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.065 [2024-11-25 10:32:33.227697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:39.065 [2024-11-25 10:32:33.227750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.065 [2024-11-25 10:32:33.227863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.227960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.065 [2024-11-25 10:32:33.228151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:39.065 [2024-11-25 10:32:33.228208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.065 [2024-11-25 10:32:33.228249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.065 [2024-11-25 10:32:33.344389] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.065 [2024-11-25 10:32:33.344629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:39.065 [2024-11-25 10:32:33.344765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.065 [2024-11-25 10:32:33.344836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.432524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.432784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:39.325 [2024-11-25 10:32:33.432931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.432985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.433235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.433361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.325 [2024-11-25 10:32:33.433419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.433437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.433530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.433545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.325 [2024-11-25 10:32:33.433560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.433572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.433749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.433793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.325 [2024-11-25 10:32:33.433815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.433827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.433919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.433938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:39.325 [2024-11-25 10:32:33.433954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.433966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.434049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.434065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.325 [2024-11-25 10:32:33.434083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.434094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.325 [2024-11-25 10:32:33.434183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.325 [2024-11-25 10:32:33.434207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.325 [2024-11-25 10:32:33.434222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.325 [2024-11-25 10:32:33.434234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:26:39.325 [2024-11-25 10:32:33.434555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 482.305 ms, result 0
00:26:39.325 true
00:26:39.325 10:32:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78367
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78367 ']'
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78367
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78367
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:39.325 10:32:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 78367
10:32:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78367'
10:32:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78367
10:32:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78367
00:26:44.623 10:32:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:26:45.190 65536+0 records in
00:26:45.190 65536+0 records out
00:26:45.190 268435456 bytes (268 MB, 256 MiB) copied, 1.19754 s, 224 MB/s
00:26:45.190 10:32:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:45.448 [2024-11-25 10:32:39.546514] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
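A few notes on the teardown traced above. The statistics dump earlier reports WAF: inf because write amplification is total device writes divided by user writes, and this instance shut down with 960 total writes (none of them user data) against 0 user writes. The dd step is likewise internally consistent: 65536 records x 4096 B = 268435456 B, and 268435456 B / 1.19754 s is about 224 MB/s in decimal megabytes, matching dd's own summary line.

The killprocess trace shows the harness-side shutdown sequence: check the pid argument, probe the process with kill -0, resolve its command name with ps (here reactor_0, the SPDK reactor thread), refuse to signal sudo, then kill and wait. Below is a minimal sketch of that flow, reconstructed only from the commands visible in the trace; it is not the verbatim autotest_common.sh source, and error handling is simplified:

    #!/usr/bin/env bash
    # Sketch of the killprocess flow seen in the trace above (reconstruction, not SPDK source).
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1              # mirrors: '[' -z 78367 ']'
        kill -0 "$pid" || return 1             # target must still be alive
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
            [[ $process_name != sudo ]] || return 1           # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reaping works because the harness launched the app itself
    }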
00:26:45.448 [2024-11-25 10:32:39.546690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78573 ] 00:26:45.448 [2024-11-25 10:32:39.733957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.706 [2024-11-25 10:32:39.891023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.965 [2024-11-25 10:32:40.257496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.965 [2024-11-25 10:32:40.257815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:46.225 [2024-11-25 10:32:40.423876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.424122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:46.225 [2024-11-25 10:32:40.424154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:46.225 [2024-11-25 10:32:40.424169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.427821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.427865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.225 [2024-11-25 10:32:40.427882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:26:46.225 [2024-11-25 10:32:40.427895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.428031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:46.225 [2024-11-25 10:32:40.428961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:46.225 [2024-11-25 10:32:40.428994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.429009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.225 [2024-11-25 10:32:40.429022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:26:46.225 [2024-11-25 10:32:40.429034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.431038] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:46.225 [2024-11-25 10:32:40.447851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.447901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:46.225 [2024-11-25 10:32:40.447920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:26:46.225 [2024-11-25 10:32:40.447934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.448054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.448078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:46.225 [2024-11-25 10:32:40.448092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:46.225 [2024-11-25 10:32:40.448104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.456575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:46.225 [2024-11-25 10:32:40.456624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.225 [2024-11-25 10:32:40.456642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.411 ms 00:26:46.225 [2024-11-25 10:32:40.456654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.456817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.456840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.225 [2024-11-25 10:32:40.456854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:46.225 [2024-11-25 10:32:40.456866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.456928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.456951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:46.225 [2024-11-25 10:32:40.456964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:46.225 [2024-11-25 10:32:40.456975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.457012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:46.225 [2024-11-25 10:32:40.461927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.462094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.225 [2024-11-25 10:32:40.462123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.923 ms 00:26:46.225 [2024-11-25 10:32:40.462136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.462229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.462250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:46.225 [2024-11-25 10:32:40.462264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:46.225 [2024-11-25 10:32:40.462275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.462308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:46.225 [2024-11-25 10:32:40.462361] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:46.225 [2024-11-25 10:32:40.462409] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:46.225 [2024-11-25 10:32:40.462431] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:46.225 [2024-11-25 10:32:40.462547] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:46.225 [2024-11-25 10:32:40.462563] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:46.225 [2024-11-25 10:32:40.462579] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:46.225 [2024-11-25 10:32:40.462593] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:46.225 [2024-11-25 10:32:40.462614] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:46.225 [2024-11-25 10:32:40.462626] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:46.225 [2024-11-25 10:32:40.462639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:46.225 [2024-11-25 10:32:40.462649] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:46.225 [2024-11-25 10:32:40.462661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:46.225 [2024-11-25 10:32:40.462673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.462685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:46.225 [2024-11-25 10:32:40.462698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:26:46.225 [2024-11-25 10:32:40.462709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.462833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.225 [2024-11-25 10:32:40.462853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:46.225 [2024-11-25 10:32:40.462873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:26:46.225 [2024-11-25 10:32:40.462884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.225 [2024-11-25 10:32:40.463006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:46.225 [2024-11-25 10:32:40.463025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:46.225 [2024-11-25 10:32:40.463038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.225 [2024-11-25 10:32:40.463051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:46.225 [2024-11-25 10:32:40.463074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:46.225 [2024-11-25 10:32:40.463097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:46.225 [2024-11-25 10:32:40.463109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.225 [2024-11-25 10:32:40.463131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:46.225 [2024-11-25 10:32:40.463145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:46.225 [2024-11-25 10:32:40.463156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.225 [2024-11-25 10:32:40.463190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:46.225 [2024-11-25 10:32:40.463202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:46.225 [2024-11-25 10:32:40.463214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:46.225 [2024-11-25 10:32:40.463236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:46.225 [2024-11-25 10:32:40.463246] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:46.225 [2024-11-25 10:32:40.463268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.225 [2024-11-25 10:32:40.463288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:46.225 [2024-11-25 10:32:40.463299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.225 [2024-11-25 10:32:40.463320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:46.225 [2024-11-25 10:32:40.463330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.225 [2024-11-25 10:32:40.463352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:46.225 [2024-11-25 10:32:40.463362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:46.225 [2024-11-25 10:32:40.463372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.226 [2024-11-25 10:32:40.463383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:46.226 [2024-11-25 10:32:40.463393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:46.226 [2024-11-25 10:32:40.463404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.226 [2024-11-25 10:32:40.463415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:46.226 [2024-11-25 10:32:40.463426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:46.226 [2024-11-25 10:32:40.463436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.226 [2024-11-25 10:32:40.463446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:46.226 [2024-11-25 10:32:40.463457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:46.226 [2024-11-25 10:32:40.463467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.226 [2024-11-25 10:32:40.463478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:46.226 [2024-11-25 10:32:40.463489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:46.226 [2024-11-25 10:32:40.463500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.226 [2024-11-25 10:32:40.463513] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:46.226 [2024-11-25 10:32:40.463526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:46.226 [2024-11-25 10:32:40.463538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.226 [2024-11-25 10:32:40.463554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.226 [2024-11-25 10:32:40.463576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:46.226 [2024-11-25 10:32:40.463588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:46.226 [2024-11-25 10:32:40.463600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:46.226 
[2024-11-25 10:32:40.463611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:46.226 [2024-11-25 10:32:40.463622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:46.226 [2024-11-25 10:32:40.463633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:46.226 [2024-11-25 10:32:40.463646] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:46.226 [2024-11-25 10:32:40.463660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:46.226 [2024-11-25 10:32:40.463686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:46.226 [2024-11-25 10:32:40.463698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:46.226 [2024-11-25 10:32:40.463709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:46.226 [2024-11-25 10:32:40.463721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:46.226 [2024-11-25 10:32:40.463733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:46.226 [2024-11-25 10:32:40.463745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:46.226 [2024-11-25 10:32:40.463756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:46.226 [2024-11-25 10:32:40.463782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:46.226 [2024-11-25 10:32:40.463799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:46.226 [2024-11-25 10:32:40.463858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:46.226 [2024-11-25 10:32:40.463871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:46.226 [2024-11-25 10:32:40.463898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:46.226 [2024-11-25 10:32:40.463910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:46.226 [2024-11-25 10:32:40.463921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:46.226 [2024-11-25 10:32:40.463935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.226 [2024-11-25 10:32:40.463948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:46.226 [2024-11-25 10:32:40.463966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:26:46.226 [2024-11-25 10:32:40.463978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.226 [2024-11-25 10:32:40.503993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.226 [2024-11-25 10:32:40.504067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.226 [2024-11-25 10:32:40.504088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.937 ms 00:26:46.226 [2024-11-25 10:32:40.504100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.226 [2024-11-25 10:32:40.504301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.226 [2024-11-25 10:32:40.504328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:46.226 [2024-11-25 10:32:40.504343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:46.226 [2024-11-25 10:32:40.504355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.486 [2024-11-25 10:32:40.579364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.486 [2024-11-25 10:32:40.579444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:46.486 [2024-11-25 10:32:40.579470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.973 ms 00:26:46.486 [2024-11-25 10:32:40.579494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.486 [2024-11-25 10:32:40.579704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.579730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:46.487 [2024-11-25 10:32:40.579749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:46.487 [2024-11-25 10:32:40.579763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.580445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.580477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:46.487 [2024-11-25 10:32:40.580495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:26:46.487 [2024-11-25 10:32:40.580520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.580738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.580763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:46.487 [2024-11-25 10:32:40.580800] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:26:46.487 [2024-11-25 10:32:40.580816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.605053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.605277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:46.487 [2024-11-25 10:32:40.605312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.194 ms 00:26:46.487 [2024-11-25 10:32:40.605329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.626563] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:46.487 [2024-11-25 10:32:40.626620] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:46.487 [2024-11-25 10:32:40.626646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.626661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:46.487 [2024-11-25 10:32:40.626678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.113 ms 00:26:46.487 [2024-11-25 10:32:40.626692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.663816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.664031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:46.487 [2024-11-25 10:32:40.664083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.960 ms 00:26:46.487 [2024-11-25 10:32:40.664100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.683622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.683834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:46.487 [2024-11-25 10:32:40.683873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.398 ms 00:26:46.487 [2024-11-25 10:32:40.683890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.703175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.703366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:46.487 [2024-11-25 10:32:40.703400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.084 ms 00:26:46.487 [2024-11-25 10:32:40.703418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.704571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.704610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:46.487 [2024-11-25 10:32:40.704630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:26:46.487 [2024-11-25 10:32:40.704647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.791087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.487 [2024-11-25 10:32:40.791158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:46.487 [2024-11-25 10:32:40.791181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.390 ms 00:26:46.487 [2024-11-25 10:32:40.791194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.487 [2024-11-25 10:32:40.803695] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:46.747 [2024-11-25 10:32:40.824143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.747 [2024-11-25 10:32:40.824211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:46.748 [2024-11-25 10:32:40.824235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.752 ms 00:26:46.748 [2024-11-25 10:32:40.824247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.748 [2024-11-25 10:32:40.824407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.748 [2024-11-25 10:32:40.824434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:46.748 [2024-11-25 10:32:40.824449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:46.748 [2024-11-25 10:32:40.824462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.748 [2024-11-25 10:32:40.824540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.748 [2024-11-25 10:32:40.824558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:46.748 [2024-11-25 10:32:40.824572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:46.748 [2024-11-25 10:32:40.824584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.748 [2024-11-25 10:32:40.824627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.748 [2024-11-25 10:32:40.824644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:46.748 [2024-11-25 10:32:40.824662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:46.748 [2024-11-25 10:32:40.824675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.748 [2024-11-25 10:32:40.824724] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:46.748 [2024-11-25 10:32:40.824743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.748 [2024-11-25 10:32:40.824756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:46.748 [2024-11-25 10:32:40.824799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:46.748 [2024-11-25 10:32:40.824826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.748 [2024-11-25 10:32:40.856267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.748 [2024-11-25 10:32:40.856328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:46.748 [2024-11-25 10:32:40.856349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.389 ms 00:26:46.748 [2024-11-25 10:32:40.856362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.748 [2024-11-25 10:32:40.856520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.748 [2024-11-25 10:32:40.856543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:46.748 [2024-11-25 10:32:40.856557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:46.748 [2024-11-25 10:32:40.856570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
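Each management step in these startup and shutdown traces is logged by trace_step in mngt/ftl_mngt.c as a quartet: Action (or Rollback on the failure/teardown path), name, duration, status. That makes the timing profile easy to pull out of the log; the larger contributors visible above are Restore P2L checkpoints (86.390 ms), Initialize NV cache (74.973 ms), Initialize metadata (39.937 ms) and Restore valid map metadata (36.960 ms) within the 433.643 ms 'FTL startup' total reported just below. A throwaway filter such as this sketch pairs each step name with its duration (illustrative only, not SPDK tooling; it assumes the log has been split to one entry per line, and ftl.log is a placeholder filename):

    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%-32s %s\n", name, $0 }' ftl.log

The layout numbers reported during startup are also self-consistent. With 23592960 L2P entries at the logged 4-byte address size, the full mapping table needs

    \[ 23592960 \times 4\,\mathrm{B} = 94371840\,\mathrm{B} = 90.00\,\mathrm{MiB}, \]

which is exactly the l2p region size in the NV cache layout dump; assuming the FTL's usual 4 KiB logical block, those entries cover 90 GiB of logical space. The ftl_l2p_cache message above ("l2p maximum resident size is: 59 (of 60) MiB") shows that only a 60 MiB cache of that 90 MiB table is kept in memory, the remainder presumably paged in from the l2p region on demand.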
00:26:46.748 [2024-11-25 10:32:40.857893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:46.748 [2024-11-25 10:32:40.862271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.643 ms, result 0
00:26:46.748 [2024-11-25 10:32:40.863059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:46.748 [2024-11-25 10:32:40.879267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:47.686 [2024-11-25T10:32:42.987Z] Copying: 26/256 [MB] (26 MBps)
[2024-11-25T10:32:43.921Z] Copying: 54/256 [MB] (28 MBps)
[2024-11-25T10:32:45.296Z] Copying: 82/256 [MB] (27 MBps)
[2024-11-25T10:32:46.230Z] Copying: 108/256 [MB] (26 MBps)
[2024-11-25T10:32:47.167Z] Copying: 135/256 [MB] (27 MBps)
[2024-11-25T10:32:48.103Z] Copying: 159/256 [MB] (24 MBps)
[2024-11-25T10:32:49.040Z] Copying: 186/256 [MB] (26 MBps)
[2024-11-25T10:32:49.974Z] Copying: 211/256 [MB] (25 MBps)
[2024-11-25T10:32:50.909Z] Copying: 236/256 [MB] (25 MBps)
[2024-11-25T10:32:50.909Z] Copying: 256/256 [MB] (average 26 MBps)
[2024-11-25 10:32:50.672084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:56.576 [2024-11-25 10:32:50.684767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.576 [2024-11-25 10:32:50.684957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:56.576 [2024-11-25 10:32:50.684988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:26:56.576 [2024-11-25 10:32:50.685012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.576 [2024-11-25 10:32:50.685066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:26:56.576 [2024-11-25 10:32:50.688730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.576 [2024-11-25 10:32:50.688897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:56.576 [2024-11-25 10:32:50.688924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms
00:26:56.576 [2024-11-25 10:32:50.688937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.576 [2024-11-25 10:32:50.690859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.576 [2024-11-25 10:32:50.690909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:56.576 [2024-11-25 10:32:50.690927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms
00:26:56.576 [2024-11-25 10:32:50.690939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.576 [2024-11-25 10:32:50.698003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.576 [2024-11-25 10:32:50.698044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:56.576 [2024-11-25 10:32:50.698069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.038 ms
00:26:56.576 [2024-11-25 10:32:50.698081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:56.576 [2024-11-25 10:32:50.705522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:56.576 [2024-11-25 10:32:50.705560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:56.576
[2024-11-25 10:32:50.705577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.377 ms 00:26:56.576 [2024-11-25 10:32:50.705589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.576 [2024-11-25 10:32:50.736245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.576 [2024-11-25 10:32:50.736417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:56.576 [2024-11-25 10:32:50.736457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.599 ms 00:26:56.576 [2024-11-25 10:32:50.736470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.576 [2024-11-25 10:32:50.754892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.577 [2024-11-25 10:32:50.755107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:56.577 [2024-11-25 10:32:50.755149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.351 ms 00:26:56.577 [2024-11-25 10:32:50.755168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.577 [2024-11-25 10:32:50.755380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.577 [2024-11-25 10:32:50.755403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:56.577 [2024-11-25 10:32:50.755417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:26:56.577 [2024-11-25 10:32:50.755429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.577 [2024-11-25 10:32:50.786599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.577 [2024-11-25 10:32:50.786796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:56.577 [2024-11-25 10:32:50.786826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.145 ms 00:26:56.577 [2024-11-25 10:32:50.786839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.577 [2024-11-25 10:32:50.816933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.577 [2024-11-25 10:32:50.816983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:56.577 [2024-11-25 10:32:50.817012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.009 ms 00:26:56.577 [2024-11-25 10:32:50.817025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.577 [2024-11-25 10:32:50.848115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.577 [2024-11-25 10:32:50.848200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:56.577 [2024-11-25 10:32:50.848222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.015 ms 00:26:56.577 [2024-11-25 10:32:50.848244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.577 [2024-11-25 10:32:50.880040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.577 [2024-11-25 10:32:50.880119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:56.577 [2024-11-25 10:32:50.880139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.629 ms 00:26:56.577 [2024-11-25 10:32:50.880152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.577 [2024-11-25 10:32:50.880272] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:56.577 [2024-11-25 10:32:50.880309] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 
10:32:50.880658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.880997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:26:56.577 [2024-11-25 10:32:50.881022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:56.577 [2024-11-25 10:32:50.881286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:56.578 [2024-11-25 10:32:50.881679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:56.578 [2024-11-25 10:32:50.881696] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627 00:26:56.578 [2024-11-25 10:32:50.881709] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:56.578 [2024-11-25 10:32:50.881720] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:56.578 [2024-11-25 10:32:50.881733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:56.578 [2024-11-25 10:32:50.881745] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:56.578 [2024-11-25 10:32:50.881756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:56.578 [2024-11-25 10:32:50.881781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:56.578 [2024-11-25 10:32:50.881796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:56.578 [2024-11-25 10:32:50.881818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:56.578 [2024-11-25 10:32:50.881829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:56.578 [2024-11-25 10:32:50.881851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.578 [2024-11-25 10:32:50.881863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:56.578 [2024-11-25 10:32:50.881883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:26:56.578 [2024-11-25 10:32:50.881894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.578 [2024-11-25 10:32:50.899907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.578 [2024-11-25 10:32:50.899979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:56.578 [2024-11-25 10:32:50.899999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.980 ms 00:26:56.578 [2024-11-25 10:32:50.900011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.578 [2024-11-25 10:32:50.900547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.578 [2024-11-25 10:32:50.900589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:56.578 [2024-11-25 10:32:50.900605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:26:56.578 [2024-11-25 10:32:50.900617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:50.948163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:50.948230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:56.837 [2024-11-25 10:32:50.948268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:50.948281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:50.948443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:50.948466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:56.837 [2024-11-25 10:32:50.948480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:50.948492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:50.948562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:50.948582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:56.837 [2024-11-25 10:32:50.948595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:50.948607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:50.948633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:50.948648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:56.837 [2024-11-25 10:32:50.948668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:50.948681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.060110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.060190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:56.837 [2024-11-25 10:32:51.060211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.060223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.146736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:56.837 [2024-11-25 10:32:51.147045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.147058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.147157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:56.837 [2024-11-25 10:32:51.147190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.147202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.147240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:56.837 [2024-11-25 10:32:51.147267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.147285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.147422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:56.837 [2024-11-25 10:32:51.147456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.147469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.147522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:56.837 [2024-11-25 10:32:51.147553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 
[2024-11-25 10:32:51.147565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.147633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:56.837 [2024-11-25 10:32:51.147664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.147676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.147733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.837 [2024-11-25 10:32:51.147750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:56.837 [2024-11-25 10:32:51.147763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.837 [2024-11-25 10:32:51.147810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.837 [2024-11-25 10:32:51.148019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.244 ms, result 0 00:26:58.213 00:26:58.213 00:26:58.213 10:32:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78705 00:26:58.213 10:32:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:26:58.213 10:32:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78705 00:26:58.213 10:32:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78705 ']' 00:26:58.213 10:32:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.213 10:32:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.213 10:32:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.213 10:32:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.213 10:32:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:58.472 [2024-11-25 10:32:52.635307] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
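(Illustrative sketch, not part of the captured log: the trace above starts spdk_tgt and waits for it to listen on /var/tmp/spdk.sock, then drives it via scripts/rpc.py. Roughly, such a call is a JSON-RPC 2.0 request over that Unix socket; the parameter names below ("name", "lba", "num_blocks") are inferred from the CLI flags seen later in this log and should be treated as assumptions, as should the single-recv response handling.)

import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # sock_path as printed by waitforlisten above.
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        # A robust client would loop on recv() until the reply parses;
        # one read suffices for a sketch with small responses.
        return json.loads(s.recv(1 << 20).decode())

# e.g. the unmap issued later in this run:
# spdk_rpc("bdev_ftl_unmap", {"name": "ftl0", "lba": 0, "num_blocks": 1024})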
00:26:58.472 [2024-11-25 10:32:52.635881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:26:58.730 [2024-11-25 10:32:52.820282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.730 [2024-11-25 10:32:52.951093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.666 10:32:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.666 10:32:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:26:59.666 10:32:53 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:26:59.924 [2024-11-25 10:32:54.109527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.924 [2024-11-25 10:32:54.109847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:00.184 [2024-11-25 10:32:54.296262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.296325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:00.184 [2024-11-25 10:32:54.296356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:00.184 [2024-11-25 10:32:54.296370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.300660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.300724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.184 [2024-11-25 10:32:54.300752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:27:00.184 [2024-11-25 10:32:54.300765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.300927] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:00.184 [2024-11-25 10:32:54.301864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:00.184 [2024-11-25 10:32:54.301909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.301925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.184 [2024-11-25 10:32:54.301942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:27:00.184 [2024-11-25 10:32:54.301954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.303987] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:00.184 [2024-11-25 10:32:54.321109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.321157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:00.184 [2024-11-25 10:32:54.321178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.129 ms 00:27:00.184 [2024-11-25 10:32:54.321193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.321309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.321334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:00.184 [2024-11-25 10:32:54.321349] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:00.184 [2024-11-25 10:32:54.321364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.330089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.330162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:00.184 [2024-11-25 10:32:54.330179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.656 ms 00:27:00.184 [2024-11-25 10:32:54.330195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.330398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.330428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.184 [2024-11-25 10:32:54.330443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:27:00.184 [2024-11-25 10:32:54.330462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.330524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.330547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:00.184 [2024-11-25 10:32:54.330562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:00.184 [2024-11-25 10:32:54.330579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.330619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:00.184 [2024-11-25 10:32:54.335747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.335808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.184 [2024-11-25 10:32:54.335834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.132 ms 00:27:00.184 [2024-11-25 10:32:54.335849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.335958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.335979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:00.184 [2024-11-25 10:32:54.335999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:00.184 [2024-11-25 10:32:54.336017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.336056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:00.184 [2024-11-25 10:32:54.336095] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:00.184 [2024-11-25 10:32:54.336158] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:00.184 [2024-11-25 10:32:54.336184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:00.184 [2024-11-25 10:32:54.336302] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:00.184 [2024-11-25 10:32:54.336320] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:00.184 [2024-11-25 10:32:54.336344] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:00.184 [2024-11-25 10:32:54.336367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:00.184 [2024-11-25 10:32:54.336389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:00.184 [2024-11-25 10:32:54.336404] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:00.184 [2024-11-25 10:32:54.336421] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:00.184 [2024-11-25 10:32:54.336434] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:00.184 [2024-11-25 10:32:54.336455] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:00.184 [2024-11-25 10:32:54.336470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.184 [2024-11-25 10:32:54.336487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:00.184 [2024-11-25 10:32:54.336501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:27:00.184 [2024-11-25 10:32:54.336518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.184 [2024-11-25 10:32:54.336623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.336646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:00.185 [2024-11-25 10:32:54.336660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:00.185 [2024-11-25 10:32:54.336678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.185 [2024-11-25 10:32:54.336808] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:00.185 [2024-11-25 10:32:54.336836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:00.185 [2024-11-25 10:32:54.336851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:00.185 [2024-11-25 10:32:54.336870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.185 [2024-11-25 10:32:54.336884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:00.185 [2024-11-25 10:32:54.336897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:00.185 [2024-11-25 10:32:54.336909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:00.185 [2024-11-25 10:32:54.336927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:00.185 [2024-11-25 10:32:54.336949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:00.185 [2024-11-25 10:32:54.336963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:00.185 [2024-11-25 10:32:54.336975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:00.185 [2024-11-25 10:32:54.336989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:00.185 [2024-11-25 10:32:54.337000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:00.185 [2024-11-25 10:32:54.337013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:00.185 [2024-11-25 10:32:54.337025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:00.185 [2024-11-25 10:32:54.337039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.185 
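(Quick cross-check, not part of the captured log: the layout dump above reports "L2P entries: 23592960" and "L2P address size: 4"; assuming the l2p region simply stores one address per entry, those two figures reproduce the "Region l2p ... blocks: 90.00 MiB" line exactly.)

entries = 23592960        # "L2P entries" from ftl_layout_setup
addr_size = 4             # "L2P address size" (bytes)
print(entries * addr_size / (1024 * 1024))   # -> 90.0, matching the 90.00 MiB l2p region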
[2024-11-25 10:32:54.337050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:00.185 [2024-11-25 10:32:54.337064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:00.185 [2024-11-25 10:32:54.337111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:00.185 [2024-11-25 10:32:54.337152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:00.185 [2024-11-25 10:32:54.337189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:00.185 [2024-11-25 10:32:54.337227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:00.185 [2024-11-25 10:32:54.337269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:00.185 [2024-11-25 10:32:54.337301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:00.185 [2024-11-25 10:32:54.337319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:00.185 [2024-11-25 10:32:54.337331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:00.185 [2024-11-25 10:32:54.337366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:00.185 [2024-11-25 10:32:54.337380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:00.185 [2024-11-25 10:32:54.337402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:00.185 [2024-11-25 10:32:54.337435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:00.185 [2024-11-25 10:32:54.337449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337466] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:00.185 [2024-11-25 10:32:54.337480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:00.185 [2024-11-25 10:32:54.337505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.185 [2024-11-25 10:32:54.337535] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:00.185 [2024-11-25 10:32:54.337548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:00.185 [2024-11-25 10:32:54.337565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:00.185 [2024-11-25 10:32:54.337579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:00.185 [2024-11-25 10:32:54.337596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:00.185 [2024-11-25 10:32:54.337608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:00.185 [2024-11-25 10:32:54.337627] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:00.185 [2024-11-25 10:32:54.337644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:00.185 [2024-11-25 10:32:54.337681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:00.185 [2024-11-25 10:32:54.337701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:00.185 [2024-11-25 10:32:54.337714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:00.185 [2024-11-25 10:32:54.337731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:00.185 [2024-11-25 10:32:54.337744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:00.185 [2024-11-25 10:32:54.337762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:00.185 [2024-11-25 10:32:54.337788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:00.185 [2024-11-25 10:32:54.337809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:00.185 [2024-11-25 10:32:54.337822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:00.185 [2024-11-25 10:32:54.337901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:00.185 [2024-11-25 
10:32:54.337916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:00.185 [2024-11-25 10:32:54.337953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:00.185 [2024-11-25 10:32:54.337971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:00.185 [2024-11-25 10:32:54.337996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:00.185 [2024-11-25 10:32:54.338016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.338030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:00.185 [2024-11-25 10:32:54.338049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:27:00.185 [2024-11-25 10:32:54.338062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.185 [2024-11-25 10:32:54.381120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.381408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:00.185 [2024-11-25 10:32:54.381549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.955 ms 00:27:00.185 [2024-11-25 10:32:54.381605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.185 [2024-11-25 10:32:54.382011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.382148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:00.185 [2024-11-25 10:32:54.382273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:00.185 [2024-11-25 10:32:54.382439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.185 [2024-11-25 10:32:54.430812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.431069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:00.185 [2024-11-25 10:32:54.431216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.287 ms 00:27:00.185 [2024-11-25 10:32:54.431271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.185 [2024-11-25 10:32:54.431543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.431675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:00.185 [2024-11-25 10:32:54.431804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:00.185 [2024-11-25 10:32:54.431865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.185 [2024-11-25 10:32:54.432567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.185 [2024-11-25 10:32:54.432688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:00.186 [2024-11-25 10:32:54.432834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:27:00.186 [2024-11-25 10:32:54.432889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:00.186 [2024-11-25 10:32:54.433169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.186 [2024-11-25 10:32:54.433289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:00.186 [2024-11-25 10:32:54.433402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:27:00.186 [2024-11-25 10:32:54.433515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.186 [2024-11-25 10:32:54.457021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.186 [2024-11-25 10:32:54.457205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:00.186 [2024-11-25 10:32:54.457398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.426 ms 00:27:00.186 [2024-11-25 10:32:54.457452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.186 [2024-11-25 10:32:54.474708] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:00.186 [2024-11-25 10:32:54.474894] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:00.186 [2024-11-25 10:32:54.475022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.186 [2024-11-25 10:32:54.475043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:00.186 [2024-11-25 10:32:54.475064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.368 ms 00:27:00.186 [2024-11-25 10:32:54.475078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.186 [2024-11-25 10:32:54.504942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.186 [2024-11-25 10:32:54.505104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:00.186 [2024-11-25 10:32:54.505275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.759 ms 00:27:00.186 [2024-11-25 10:32:54.505330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.520975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.521120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:00.444 [2024-11-25 10:32:54.521250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.434 ms 00:27:00.444 [2024-11-25 10:32:54.521303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.536781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.536959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:00.444 [2024-11-25 10:32:54.536998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.342 ms 00:27:00.444 [2024-11-25 10:32:54.537014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.537918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.537950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:00.444 [2024-11-25 10:32:54.537973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:27:00.444 [2024-11-25 10:32:54.537986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 
10:32:54.640762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.640915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:00.444 [2024-11-25 10:32:54.640972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.724 ms 00:27:00.444 [2024-11-25 10:32:54.640994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.664017] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:00.444 [2024-11-25 10:32:54.690963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.691066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:00.444 [2024-11-25 10:32:54.691104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.660 ms 00:27:00.444 [2024-11-25 10:32:54.691123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.691319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.691347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:00.444 [2024-11-25 10:32:54.691363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:00.444 [2024-11-25 10:32:54.691381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.691480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.691514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:00.444 [2024-11-25 10:32:54.691529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:00.444 [2024-11-25 10:32:54.691548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.691595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.691618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:00.444 [2024-11-25 10:32:54.691632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:00.444 [2024-11-25 10:32:54.691651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.444 [2024-11-25 10:32:54.691708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:00.444 [2024-11-25 10:32:54.691748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.444 [2024-11-25 10:32:54.691762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:00.444 [2024-11-25 10:32:54.691838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:00.444 [2024-11-25 10:32:54.691852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.445 [2024-11-25 10:32:54.738480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.445 [2024-11-25 10:32:54.738574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:00.445 [2024-11-25 10:32:54.738612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.542 ms 00:27:00.445 [2024-11-25 10:32:54.738631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.445 [2024-11-25 10:32:54.738925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.445 [2024-11-25 10:32:54.738956] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:00.445 [2024-11-25 10:32:54.738984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:00.445 [2024-11-25 10:32:54.739018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.445 [2024-11-25 10:32:54.740573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:00.445 [2024-11-25 10:32:54.746421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 443.786 ms, result 0 00:27:00.445 [2024-11-25 10:32:54.747965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.702 Some configs were skipped because the RPC state that can call them passed over. 00:27:00.702 10:32:54 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:00.960 [2024-11-25 10:32:55.121444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.960 [2024-11-25 10:32:55.121537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:00.960 [2024-11-25 10:32:55.121562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:27:00.960 [2024-11-25 10:32:55.121582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.960 [2024-11-25 10:32:55.121641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.809 ms, result 0 00:27:00.960 true 00:27:00.961 10:32:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:01.219 [2024-11-25 10:32:55.442880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.219 [2024-11-25 10:32:55.443236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:01.219 [2024-11-25 10:32:55.443277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:27:01.219 [2024-11-25 10:32:55.443292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.219 [2024-11-25 10:32:55.443363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.828 ms, result 0 00:27:01.219 true 00:27:01.219 10:32:55 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78705 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78705 ']' 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78705 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78705 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78705' 00:27:01.219 killing process with pid 78705 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78705 00:27:01.219 10:32:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78705 00:27:02.598 [2024-11-25 10:32:56.557513] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.557875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:02.598 [2024-11-25 10:32:56.558012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:02.598 [2024-11-25 10:32:56.558043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.558086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:02.598 [2024-11-25 10:32:56.561917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.561954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:02.598 [2024-11-25 10:32:56.561976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.802 ms 00:27:02.598 [2024-11-25 10:32:56.561988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.562325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.562352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:02.598 [2024-11-25 10:32:56.562396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:27:02.598 [2024-11-25 10:32:56.562410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.566621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.566691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:02.598 [2024-11-25 10:32:56.566730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:27:02.598 [2024-11-25 10:32:56.566752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.581777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.581853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:02.598 [2024-11-25 10:32:56.581906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.890 ms 00:27:02.598 [2024-11-25 10:32:56.581927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.604079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.604152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:02.598 [2024-11-25 10:32:56.604203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.988 ms 00:27:02.598 [2024-11-25 10:32:56.604242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.618686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.618990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:02.598 [2024-11-25 10:32:56.619051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.324 ms 00:27:02.598 [2024-11-25 10:32:56.619073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.619366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.619401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:02.598 [2024-11-25 10:32:56.619429] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:27:02.598 [2024-11-25 10:32:56.619450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.641559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.641648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:02.598 [2024-11-25 10:32:56.641702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.058 ms 00:27:02.598 [2024-11-25 10:32:56.641723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.663582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.663878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:02.598 [2024-11-25 10:32:56.663951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.688 ms 00:27:02.598 [2024-11-25 10:32:56.663978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.677850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.677895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:02.598 [2024-11-25 10:32:56.677922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.743 ms 00:27:02.598 [2024-11-25 10:32:56.677936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.690607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.598 [2024-11-25 10:32:56.690651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:02.598 [2024-11-25 10:32:56.690678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.550 ms 00:27:02.598 [2024-11-25 10:32:56.690692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.598 [2024-11-25 10:32:56.690814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:02.598 [2024-11-25 10:32:56.690857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.690993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 
[2024-11-25 10:32:56.691040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:02.598 [2024-11-25 10:32:56.691517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free
[2024-11-25 10:32:56.691536 .. 10:32:56.692715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 36-100: 0 / 261120 wr_cnt: 0 state: free (65 identical entries collapsed)
[2024-11-25 10:32:56.692784] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627
[2024-11-25 10:32:56.692814] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-25 10:32:56.692834] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-25 10:32:56.692846] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-25 10:32:56.692861] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-25 10:32:56.692873] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
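The "WAF: inf" in the statistics dump above follows directly from the two counters beside it: write amplification is conventionally the ratio of total media writes to user writes, and this pass recorded 960 internal (metadata) writes against zero user writes, so the ratio diverges. As a worked form (assuming the conventional definition, which the dumped counters are consistent with):

    % write amplification factor from the counters dumped above
    \mathrm{WAF} = \frac{W_{\mathrm{total}}}{W_{\mathrm{user}}} = \frac{960}{0} \rightarrow \infty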
[2024-11-25 10:32:56.692938] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 2.133 ms, status 0
[2024-11-25 10:32:56.710269] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 17.234 ms, status 0
[2024-11-25 10:32:56.711128] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.477 ms, status 0
[2024-11-25 10:32:56.771937] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-11-25 10:32:56.772264] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-11-25 10:32:56.772399] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-11-25 10:32:56.772485] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-11-25 10:32:56.882622] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-11-25 10:32:56.968861] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969108] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969203] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969395] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969505] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969608] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969724] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-11-25 10:32:56.969993] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.449 ms, result 0
10:32:57 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
10:32:57 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-25 10:32:58.050207] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
[2024-11-25 10:32:58.050424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78776 ]
[2024-11-25 10:32:58.237126] app.c: spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-25 10:32:58.371232] reactor.c: reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-25 10:32:58.730393] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-25 10:32:58.730479] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-25 10:32:58.895853] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.006 ms, status 0
[2024-11-25 10:32:58.899739] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 3.517 ms, status 0
[2024-11-25 10:32:58.900158] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-25 10:32:58.901120] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-25 10:32:58.901164] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 1.017 ms, status 0
[2024-11-25 10:32:58.903307] mngt/ftl_mngt_md.c: ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-25 10:32:58.920282] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 16.975 ms, status 0
[2024-11-25 10:32:58.920537] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.033 ms, status 0
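The spdk_dd invocation above reads 65536 I/O units from ftl0 into the test data file, and the copy progress further down tops out at 256/256 [MB], which is consistent with 4 KiB units. A minimal sanity check (the 4096-byte unit size is an assumption inferred from those totals, not read from ftl.json):

    # Hypothetical cross-check of the spdk_dd sizing seen in this log.
    count = 65536                      # --count from the command line above
    unit_bytes = 4096                  # assumed 4 KiB logical block (inferred, not from the log)
    print(count * unit_bytes / 2**20)  # -> 256.0, matching "Copying: 256/256 [MB]" below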
[2024-11-25 10:32:58.929655] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 9.001 ms, status 0
[2024-11-25 10:32:58.929959] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.082 ms, status 0
[2024-11-25 10:32:58.930058] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.013 ms, status 0
[2024-11-25 10:32:58.930139] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-25 10:32:58.935247] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 5.120 ms, status 0
[2024-11-25 10:32:58.935416] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.014 ms, status 0
[2024-11-25 10:32:58.935492] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-25 10:32:58.935531] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-11-25 10:32:58.935712] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-11-25 10:32:58.935758] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB
[2024-11-25 10:32:58.935822] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
[2024-11-25 10:32:58.935879] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.391 ms, status 0
[2024-11-25 10:32:58.936018] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
[2024-11-25 10:32:58.936185] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
    sb                 0.00 /   0.12
    l2p                0.12 /  90.00
    band_md           90.12 /   0.50
    band_md_mirror    90.62 /   0.50
    nvc_md           123.88 /   0.12
    nvc_md_mirror    124.00 /   0.12
    p2l0              91.12 /   8.00
    p2l1              99.12 /   8.00
    p2l2             107.12 /   8.00
    p2l3             115.12 /   8.00
    trim_md          123.12 /   0.25
    trim_md_mirror   123.38 /   0.25
    trim_log         123.62 /   0.12
    trim_log_mirror  123.75 /   0.12
[2024-11-25 10:32:58.936683] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB):
    sb_mirror          0.00 /      0.12
    vmap          102400.25 /      3.38
    data_btm           0.25 / 102400.00
[2024-11-25 10:32:58.936828] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc (type / ver / blk_offs / blk_sz):
    0x0         ver 5   0x0       0x20
    0x2         ver 0   0x20      0x5a00
    0x3         ver 2   0x5a20    0x80
    0x4         ver 2   0x5aa0    0x80
    0xa         ver 2   0x5b20    0x800
    0xb         ver 2   0x6320    0x800
    0xc         ver 2   0x6b20    0x800
    0xd         ver 2   0x7320    0x800
    0xe         ver 0   0x7b20    0x40
    0xf         ver 0   0x7b60    0x40
    0x10        ver 1   0x7ba0    0x20
    0x11        ver 1   0x7bc0    0x20
    0x6         ver 2   0x7be0    0x20
    0x7         ver 2   0x7c00    0x20
    0xfffffffe  ver 0   0x7c20    0x13b6e0
[2024-11-25 10:32:58.937046] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev (type / ver / blk_offs / blk_sz):
    0x1         ver 5   0x0         0x20
    0xfffffffe  ver 0   0x20        0x20
    0x9         ver 0   0x40        0x1900000
    0x5         ver 0   0x1900040   0x360
    0xfffffffe  ver 0   0x19003a0   0x3fc60
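The superblock layout tables above give regions as hexadecimal block offsets and sizes, while the dump_region lines report MiB; the two agree if one FTL block is 4096 bytes (an assumption these numbers bear out). For instance, the l2p region (type 0x2, blk_sz 0x5a00) matches both the reported 90.00 MiB region size and the L2P geometry of 23592960 entries at 4 bytes each:

    # Cross-checking the layout dump above, assuming a 4096-byte FTL block.
    BLOCK = 4096
    print(0x5a00 * BLOCK / 2**20)      # -> 90.0 MiB, the "l2p ... 90.00" region size
    print(23592960 * 4 / 2**20)        # -> 90.0 MiB, L2P entries x address size, consistent
    print(0x20 * BLOCK / 2**20)        # -> 0.125 MiB, the reported 0.12 MiB l2p offset
    print(0x1900000 * BLOCK / 2**20)   # -> 102400.0 MiB, the data_btm region size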
[2024-11-25 10:32:58.937122] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 1.004 ms, status 0
[2024-11-25 10:32:58.977459] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 40.214 ms, status 0
[2024-11-25 10:32:58.977801] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.092 ms, status 0
[2024-11-25 10:32:59.035508] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 57.611 ms, status 0
[2024-11-25 10:32:59.035858] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
[2024-11-25 10:32:59.036495] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.552 ms, status 0
[2024-11-25 10:32:59.036740] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.146 ms, status 0
[2024-11-25 10:32:59.056915] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 20.076 ms, status 0
[2024-11-25 10:32:59.073982] ftl_nv_cache.c: ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
[2024-11-25 10:32:59.074030] ftl_nv_cache.c: ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-25 10:32:59.074051] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 16.851 ms, status 0
[2024-11-25 10:32:59.103526] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 29.330 ms, status 0
[2024-11-25 10:32:59.119392] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 15.646 ms, status 0
[2024-11-25 10:32:59.134916] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 15.187 ms, status 0
[2024-11-25 10:32:59.135945] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.811 ms, status 0
[2024-11-25 10:32:59.216329] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 80.256 ms, status 0
[2024-11-25 10:32:59.229649] ftl_l2p_cache.c: ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-25 10:32:59.251332] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 34.677 ms, status 0
[2024-11-25 10:32:59.251658] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.007 ms, status 0
[2024-11-25 10:32:59.251818] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.081 ms, status 0
[2024-11-25 10:32:59.251914] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.007 ms, status 0
[2024-11-25 10:32:59.252009] mngt/ftl_mngt_self_test.c: ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-25 10:32:59.252028] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.020 ms, status 0
[2024-11-25 10:32:59.284011] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 31.917 ms, status 0
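Each management step above is traced with a name, a duration and a status, and the finish message just below reports the overall startup time (389.466 ms); pulling the per-step durations out of a captured log shows where that total goes. A small sketch (the excerpt is a stand-in for a full captured log, in the collapsed one-line-per-step form used here):

    import re

    # Stand-in excerpt; in practice, read the captured log file instead.
    log = """Action 'Initialize NV cache': duration 57.611 ms, status 0
    Action 'Restore P2L checkpoints': duration 80.256 ms, status 0
    Action 'Initialize metadata': duration 40.214 ms, status 0"""

    steps = re.findall(r"'(.+?)': duration ([\d.]+) ms", log)
    for name, ms in sorted(steps, key=lambda s: -float(s[1])):
        print(f"{float(ms):8.3f} ms  {name}")   # largest contributors first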
[2024-11-25 10:32:59.284299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.060 ms, status 0
[2024-11-25 10:32:59.285728] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-25 10:32:59.289983] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.466 ms, result 0
[2024-11-25 10:32:59.290814] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-25 10:32:59.306950] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-25T10:33:01.358Z] Copying: 26/256 [MB] (26 MBps)
[2024-11-25T10:33:02.731Z] Copying: 50/256 [MB] (24 MBps)
[2024-11-25T10:33:03.667Z] Copying: 75/256 [MB] (24 MBps)
[2024-11-25T10:33:04.601Z] Copying: 97/256 [MB] (21 MBps)
[2024-11-25T10:33:05.535Z] Copying: 120/256 [MB] (22 MBps)
[2024-11-25T10:33:06.469Z] Copying: 142/256 [MB] (22 MBps)
[2024-11-25T10:33:07.400Z] Copying: 164/256 [MB] (21 MBps)
[2024-11-25T10:33:08.407Z] Copying: 187/256 [MB] (23 MBps)
[2024-11-25T10:33:09.368Z] Copying: 210/256 [MB] (23 MBps)
[2024-11-25T10:33:10.303Z] Copying: 234/256 [MB] (23 MBps)
[2024-11-25T10:33:10.303Z] Copying: 256/256 [MB] (average 23 MBps)
[2024-11-25 10:33:10.228400] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-25 10:33:10.240942] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.004 ms, status 0
[2024-11-25 10:33:10.241084] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-25 10:33:10.244688] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 3.582 ms, status 0
[2024-11-25 10:33:10.245053] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.258 ms, status 0
[2024-11-25 10:33:10.248724] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 3.603 ms, status 0
[2024-11-25 10:33:10.256039] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 7.211 ms, status 0
[2024-11-25 10:33:10.286172] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 29.977 ms, status 0
[2024-11-25 10:33:10.304499] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 18.187 ms, status 0
[2024-11-25 10:33:10.304753] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.092 ms, status 0
[2024-11-25 10:33:10.335370] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 30.505 ms, status 0
[2024-11-25 10:33:10.366033] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 30.386 ms, status 0
[2024-11-25 10:33:10.396112] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 29.787 ms, status 0
[2024-11-25 10:33:10.426136] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 29.851 ms, status 0
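The copy progress above ran from roughly the end of FTL startup (10:32:59.29) to the final 256/256 tick at 10:33:10.303Z and reports a 23 MBps average; the end-to-end numbers line up. A quick check (the start time is approximated from the surrounding timestamps, since the first tick is only printed after 26 MB):

    from datetime import datetime, timezone

    # Approximate copy window, taken from timestamps in the log above.
    start = datetime(2024, 11, 25, 10, 32, 59, 290814, tzinfo=timezone.utc)  # after 'FTL startup' finished
    end = datetime(2024, 11, 25, 10, 33, 10, 303000, tzinfo=timezone.utc)    # final 256/256 tick
    rate = 256 / (end - start).total_seconds()
    print(f"{rate:.1f} MB/s")   # ~23.2 MB/s, matching "(average 23 MBps)"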
[2024-11-25 10:33:10.426274] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-25 10:33:10.426300 .. 10:33:10.427493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-97: 0 / 261120 wr_cnt: 0 state: free (97 identical entries collapsed)
[2024-11-25 10:33:10.427504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:16.233 [2024-11-25 10:33:10.427516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:16.233 [2024-11-25 10:33:10.427528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:16.233 [2024-11-25 10:33:10.427548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:16.233 [2024-11-25 10:33:10.427560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627 00:27:16.233 [2024-11-25 10:33:10.427572] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:16.233 [2024-11-25 10:33:10.427582] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:16.233 [2024-11-25 10:33:10.427593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:16.233 [2024-11-25 10:33:10.427604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:16.233 [2024-11-25 10:33:10.427614] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:16.233 [2024-11-25 10:33:10.427625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:16.233 [2024-11-25 10:33:10.427636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:16.233 [2024-11-25 10:33:10.427645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:16.233 [2024-11-25 10:33:10.427655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:16.233 [2024-11-25 10:33:10.427666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.233 [2024-11-25 10:33:10.427683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:16.233 [2024-11-25 10:33:10.427696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:27:16.233 [2024-11-25 10:33:10.427707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.233 [2024-11-25 10:33:10.444582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.233 [2024-11-25 10:33:10.444624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:16.233 [2024-11-25 10:33:10.444642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.848 ms 00:27:16.233 [2024-11-25 10:33:10.444654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.233 [2024-11-25 10:33:10.445181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.233 [2024-11-25 10:33:10.445206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:16.233 [2024-11-25 10:33:10.445220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:27:16.233 [2024-11-25 10:33:10.445231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.233 [2024-11-25 10:33:10.492372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.233 [2024-11-25 10:33:10.492567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:16.233 [2024-11-25 10:33:10.492596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.233 [2024-11-25 10:33:10.492610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.233 [2024-11-25 10:33:10.492757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.233 [2024-11-25 
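The stats dump just above reports WAF: inf because the write-amplification factor is total device writes divided by user writes, and this pass issued no user I/O at all: 960 internal/metadata writes against 0 user writes. A minimal sketch of that arithmetic (the quotient convention is an assumption; the log does not show SPDK's exact accounting units):

```python
def waf(total_writes: int, user_writes: int) -> float:
    """Write-amplification factor: device writes per user write.

    Mirrors the dump above: 960 total writes with 0 user writes
    prints as 'inf', i.e. pure housekeeping traffic.
    """
    if user_writes == 0:
        return float("inf")
    return total_writes / user_writes

print(waf(960, 0))    # inf, as logged
print(waf(960, 480))  # 2.0 -- each user write cost one extra device write
```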
00:27:16.233 [2024-11-25 10:33:10.444582] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 16.848 ms, status: 0)
00:27:16.233 [2024-11-25 10:33:10.445181] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.477 ms, status: 0)
00:27:16.233 [2024-11-25 10:33:10.492372] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:27:16.233 [2024-11-25 10:33:10.492757] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:27:16.233 [2024-11-25 10:33:10.492907] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:27:16.233 [2024-11-25 10:33:10.492979] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:27:16.491 [2024-11-25 10:33:10.600850] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:27:16.491 [2024-11-25 10:33:10.686619] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:27:16.491 [2024-11-25 10:33:10.686842] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:27:16.492 [2024-11-25 10:33:10.686928] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:27:16.492 [2024-11-25 10:33:10.687109] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:27:16.492 [2024-11-25 10:33:10.687206] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:27:16.492 [2024-11-25 10:33:10.687313] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:27:16.492 [2024-11-25 10:33:10.687411] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:27:16.492 [2024-11-25 10:33:10.687657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.696 ms, result 0
00:27:17.426
00:27:17.426
00:27:17.426 10:33:11 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:27:17.426 10:33:11 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:27:18.008 10:33:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:18.266 [2024-11-25 10:33:12.369255] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
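The spdk_dd invocation above pushes 1024 blocks of the pre-generated random pattern through the ftl0 bdev (4 MiB total, judging by the "Copying: 4096/4096 [kB]" progress line later in the log; the 4 KiB block size is inferred, not logged). The FTL stack it writes through is assembled from test/ftl/config/ftl.json, which the log never prints. A plausible minimal shape for that file, assuming the standard bdev_ftl_create RPC parameters and the nvc0n1p0 cache split the startup messages mention, is sketched below; the base bdev name is a placeholder, not copied from the real config:

```python
import json

# Hypothetical reconstruction of test/ftl/config/ftl.json -- the real file is
# generated by the test scripts and does not appear in this log.
ftl_json = {
    "subsystems": [
        {
            "subsystem": "bdev",
            "config": [
                {
                    # bdev_ftl_create is the real SPDK RPC; the values below
                    # are assumptions based on the bdev names in the log.
                    "method": "bdev_ftl_create",
                    "params": {
                        "name": "ftl0",
                        "base_bdev": "basen1",  # placeholder base device name
                        "cache": "nvc0n1p0",    # write-buffer cache per the log
                        "uuid": "320fbe64-4a13-4fdf-8f16-2944badb2627",
                    },
                }
            ],
        }
    ]
}

print(json.dumps(ftl_json, indent=2))
```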
00:27:18.266 [2024-11-25 10:33:12.370081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78923 ]
00:27:18.266 [2024-11-25 10:33:12.563714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:18.524 [2024-11-25 10:33:12.715283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:18.782 [2024-11-25 10:33:13.070560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:18.782 [2024-11-25 10:33:13.070656] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:19.041 [2024-11-25 10:33:13.235330] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.005 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.239102] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 3.434 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.239310] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:19.041 [2024-11-25 10:33:13.240228] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:19.041 [2024-11-25 10:33:13.240270] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.970 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.242326] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:19.041 [2024-11-25 10:33:13.259669] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 17.345 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.259930] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.026 ms, status: 0)
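Stepping back to the cmp/md5sum pair above: that is the trim check itself. The test compares the first 4194304 bytes of the read-back data file against /dev/zero, since a trimmed range is expected to read back as zeroes, and fingerprints the file for a later equality check. A rough Python equivalent of those two commands (paths and length taken from the command lines; the all-zero expectation is inferred from the /dev/zero comparison target):

```python
import hashlib

DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"
LENGTH = 4194304  # --bytes=4194304, i.e. the first 4 MiB

with open(DATA, "rb") as f:
    data = f.read()

# cmp --bytes=4194304 data /dev/zero: the compared prefix must be all zeroes
assert data[:LENGTH] == bytes(LENGTH), "trimmed range did not read back as zeroes"

# md5sum data: fingerprint the whole file for the test's later comparison
print(hashlib.md5(data).hexdigest())
```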
00:27:19.041 [2024-11-25 10:33:13.268490] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 8.453 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.268697] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.069 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.268806] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.011 ms, status: 0)
00:27:19.041 [2024-11-25 10:33:13.268895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:27:19.041 [2024-11-25 10:33:13.273924] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 5.040 ms, status: 0)
00:27:19.042 [2024-11-25 10:33:13.274090] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.013 ms, status: 0)
00:27:19.042 [2024-11-25 10:33:13.274165] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:19.042 [2024-11-25 10:33:13.274200] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:27:19.042 [2024-11-25 10:33:13.274244] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:19.042 [2024-11-25 10:33:13.274266] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:27:19.042 [2024-11-25 10:33:13.274387] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:19.042 [2024-11-25 10:33:13.274405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:19.042 [2024-11-25 10:33:13.274421] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:19.042 [2024-11-25 10:33:13.274436] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:27:19.042 [2024-11-25 10:33:13.274457] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:27:19.042 [2024-11-25 10:33:13.274469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:27:19.042 [2024-11-25 10:33:13.274480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:27:19.042 [2024-11-25 10:33:13.274492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:27:19.042 [2024-11-25 10:33:13.274503] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:27:19.042 [2024-11-25 10:33:13.274515] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.352 ms, status: 0)
00:27:19.042 [2024-11-25 10:33:13.274650] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
00:27:19.042 [2024-11-25 10:33:13.274834] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb: offset 0.00 MiB, blocks 0.12 MiB
    Region l2p: offset 0.12 MiB, blocks 90.00 MiB
    Region band_md: offset 90.12 MiB, blocks 0.50 MiB
    Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
    Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
    Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
    Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
    Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
    Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
    Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
    Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
    Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
    Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
    Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
00:27:19.042 [2024-11-25 10:33:13.275330] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
    Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
    Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:27:19.042 [2024-11-25 10:33:13.275445] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
    Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:27:19.042 [2024-11-25 10:33:13.275625] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:19.043 [2024-11-25 10:33:13.275695] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.948 ms, status: 0)
00:27:19.043 [2024-11-25 10:33:13.316482] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 40.632 ms, status: 0)
00:27:19.043 [2024-11-25 10:33:13.316803] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.091 ms, status: 0)
00:27:19.043 [2024-11-25 10:33:13.371680] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 54.786 ms, status: 0)
00:27:19.301 [2024-11-25 10:33:13.372174] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.004 ms, status: 0)
00:27:19.301 [2024-11-25 10:33:13.372801] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.548 ms, status: 0)
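The layout numbers above are internally consistent: the l2p region holds one mapping entry per logical block, so 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB the NV cache layout reserves for l2p. A quick check of that relationship (values copied from the dump; the 4 KiB logical block size in the second print is an assumption, not logged):

```python
L2P_ENTRIES = 23592960  # "L2P entries" from ftl_layout.c
L2P_ADDR_SIZE = 4       # "L2P address size" in bytes

l2p_mib = L2P_ENTRIES * L2P_ADDR_SIZE / (1024 * 1024)
print(l2p_mib)  # 90.0 -- matches the l2p region's 90.00 MiB

# Assuming 4 KiB logical blocks, those entries map 90 GiB of user space.
print(L2P_ENTRIES * 4096 / 2**30)  # 90.0 (GiB)
```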
00:27:19.301 [2024-11-25 10:33:13.373035] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.142 ms, status: 0)
00:27:19.301 [2024-11-25 10:33:13.393315] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 20.205 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.410311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:27:19.302 [2024-11-25 10:33:13.410356] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:19.302 [2024-11-25 10:33:13.410383] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 16.832 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.440550] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 30.025 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.456665] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 15.905 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.472494] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 15.637 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.473511] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.778 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.552710] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 79.102 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.565584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:27:19.302 [2024-11-25 10:33:13.586616] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 33.630 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.586913] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.015 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.587037] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.045 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.587120] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.006 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.587205] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:19.302 [2024-11-25 10:33:13.587222] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.017 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.618828] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 31.539 ms, status: 0)
00:27:19.302 [2024-11-25 10:33:13.619207] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.048 ms, status: 0)
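Every management step above is traced with a name, a duration, and a status, and the finish_msg that follows reports the whole 'FTL startup' pipeline at 384.716 ms; per-step durations only cover time spent inside steps, so they sum to less than the total. A small parser for the condensed trace lines used in this log (the regex targets the "Action/Rollback: name (duration: X ms, status: S)" form shown here, not SPDK's raw four-line trace output):

```python
import re

TRACE = re.compile(
    r"\[FTL\]\[ftl0\] (?:Action|Rollback): (?P<name>.+?) "
    r"\(duration: (?P<ms>[0-9.]+) ms, status: (?P<status>-?\d+)\)"
)

def summarize(log_text: str):
    """Return (total step time in ms, list of steps with nonzero status)."""
    steps = [(m["name"], float(m["ms"]), int(m["status"]))
             for m in TRACE.finditer(log_text)]
    failed = [s for s in steps if s[2] != 0]
    return sum(ms for _, ms, _ in steps), failed

sample = ("[FTL][ftl0] Action: Load super block (duration: 17.345 ms, status: 0)\n"
          "[FTL][ftl0] Action: Initialize NV cache (duration: 54.786 ms, status: 0)\n")
total_ms, failed = summarize(sample)
print(total_ms, failed)  # ~72.131 ms, no failing steps
```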
00:27:19.302 [2024-11-25 10:33:13.620381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:19.302 [2024-11-25 10:33:13.624397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.716 ms, result 0
00:27:19.302 [2024-11-25 10:33:13.625226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:19.561 [2024-11-25 10:33:13.641356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:19.561 [2024-11-25T10:33:13.894Z] Copying: 4096/4096 [kB] (average 24 MBps)
00:27:19.561 [2024-11-25 10:33:13.807549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:19.561 [2024-11-25 10:33:13.820292] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.005 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.820416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:27:19.561 [2024-11-25 10:33:13.824188] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 3.752 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.825811] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 1.522 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.829945] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 4.049 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.837473] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 7.421 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.868669] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 31.069 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.886344] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 17.532 ms, status: 0)
00:27:19.561 [2024-11-25 10:33:13.886607] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 0.095 ms, status: 0)
00:27:19.821 [2024-11-25 10:33:13.917622] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 30.929 ms, status: 0)
00:27:19.821 [2024-11-25 10:33:13.947883] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 30.127 ms, status: 0)
00:27:19.821 [2024-11-25 10:33:13.978054] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 30.042 ms, status: 0)
00:27:19.821 [2024-11-25 10:33:14.008182] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 29.957 ms, status: 0)
00:27:19.821 [2024-11-25 10:33:14.008313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:19.821 [2024-11-25 10:33:14.008337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for each band)
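The spdk_dd progress line above reports 4096 kB copied at an average of 24 MBps, which lines up with the surrounding timestamps: the data path's IO channel lives from 10:33:13.641 to 10:33:13.807, about 0.17 s. A quick consistency check of that figure (treating MBps as MiB/s, an assumption about spdk_dd's reporting units):

```python
copied_kib = 4096                   # "Copying: 4096/4096 [kB]"
elapsed_s = 13.807549 - 13.641356   # IO channel create -> destroy timestamps

mib = copied_kib / 1024
print(round(mib / elapsed_s, 1))    # ~24.1 MiB/s, matching the logged 24 MBps
```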
00:27:19.823 [2024-11-25 10:33:14.009587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:19.823 [2024-11-25 10:33:14.009598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627
00:27:19.823 [2024-11-25 10:33:14.009610] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:19.823 [2024-11-25 10:33:14.009622] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:19.823 [2024-11-25 10:33:14.009633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:19.823 [2024-11-25 10:33:14.009644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:19.823 [2024-11-25 10:33:14.009655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:27:19.823 [2024-11-25 10:33:14.009709] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.397 ms, status: 0)
00:27:19.823 [2024-11-25 10:33:14.026595] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 16.805 ms, status: 0)
00:27:19.823 [2024-11-25 10:33:14.027176] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.460 ms, status: 0)
00:27:19.823 [2024-11-25 10:33:14.075078] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:27:19.823 [2024-11-25 10:33:14.075281] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:27:19.823 [2024-11-25 10:33:14.075386] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:27:19.823 [2024-11-25 10:33:14.075455]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.823 [2024-11-25 10:33:14.075475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.823 [2024-11-25 10:33:14.075487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.823 [2024-11-25 10:33:14.075498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.186543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.186614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:20.082 [2024-11-25 10:33:14.186634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.186645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.082 [2024-11-25 10:33:14.273239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.082 [2024-11-25 10:33:14.273381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.082 [2024-11-25 10:33:14.273464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.082 [2024-11-25 10:33:14.273634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:20.082 [2024-11-25 10:33:14.273728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.082 [2024-11-25 10:33:14.273863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273875] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.273935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.082 [2024-11-25 10:33:14.273952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.082 [2024-11-25 10:33:14.273971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.082 [2024-11-25 10:33:14.273982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.082 [2024-11-25 10:33:14.274157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 453.867 ms, result 0 00:27:21.017 00:27:21.017 00:27:21.017 10:33:15 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78959 00:27:21.017 10:33:15 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:21.017 10:33:15 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78959 00:27:21.017 10:33:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78959 ']' 00:27:21.017 10:33:15 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.017 10:33:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.017 10:33:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.017 10:33:15 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.017 10:33:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:21.276 [2024-11-25 10:33:15.413790] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:27:21.276 [2024-11-25 10:33:15.413967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78959 ] 00:27:21.276 [2024-11-25 10:33:15.597733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.534 [2024-11-25 10:33:15.751686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.470 10:33:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.470 10:33:16 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:22.470 10:33:16 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:22.729 [2024-11-25 10:33:16.915910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.729 [2024-11-25 10:33:16.915990] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.989 [2024-11-25 10:33:17.102346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.102428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:22.989 [2024-11-25 10:33:17.102458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:22.989 [2024-11-25 10:33:17.102472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.106621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.106668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.989 [2024-11-25 10:33:17.106691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:27:22.989 [2024-11-25 10:33:17.106704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.106868] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:22.989 [2024-11-25 10:33:17.107909] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:22.989 [2024-11-25 10:33:17.107949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.107963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.989 [2024-11-25 10:33:17.107978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:27:22.989 [2024-11-25 10:33:17.107989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.110040] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:22.989 [2024-11-25 10:33:17.127120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.127179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:22.989 [2024-11-25 10:33:17.127200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.086 ms 00:27:22.989 [2024-11-25 10:33:17.127219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.127348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.127378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:22.989 [2024-11-25 10:33:17.127394] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:22.989 [2024-11-25 10:33:17.127411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.135929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.135996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.989 [2024-11-25 10:33:17.136013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.427 ms 00:27:22.989 [2024-11-25 10:33:17.136031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.136216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.136241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.989 [2024-11-25 10:33:17.136256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:27:22.989 [2024-11-25 10:33:17.136269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.136331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.136366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:22.989 [2024-11-25 10:33:17.136380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:22.989 [2024-11-25 10:33:17.136397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.136436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:22.989 [2024-11-25 10:33:17.141394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.141432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.989 [2024-11-25 10:33:17.141456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.962 ms 00:27:22.989 [2024-11-25 10:33:17.141470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.141576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.141596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:22.989 [2024-11-25 10:33:17.141615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:22.989 [2024-11-25 10:33:17.141633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.141673] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:22.989 [2024-11-25 10:33:17.141707] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:22.989 [2024-11-25 10:33:17.141782] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:22.989 [2024-11-25 10:33:17.141811] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:22.989 [2024-11-25 10:33:17.141931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.989 [2024-11-25 10:33:17.141966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.989 [2024-11-25 10:33:17.141992] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:22.989 [2024-11-25 10:33:17.142014] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142034] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:22.989 [2024-11-25 10:33:17.142069] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.989 [2024-11-25 10:33:17.142081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.989 [2024-11-25 10:33:17.142102] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.989 [2024-11-25 10:33:17.142116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.142133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.989 [2024-11-25 10:33:17.142146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:27:22.989 [2024-11-25 10:33:17.142162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.142267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.989 [2024-11-25 10:33:17.142288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.989 [2024-11-25 10:33:17.142301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:22.989 [2024-11-25 10:33:17.142318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.989 [2024-11-25 10:33:17.142443] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.989 [2024-11-25 10:33:17.142464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.989 [2024-11-25 10:33:17.142476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.989 [2024-11-25 10:33:17.142514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.989 [2024-11-25 10:33:17.142553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.989 [2024-11-25 10:33:17.142583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.989 [2024-11-25 10:33:17.142596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:22.989 [2024-11-25 10:33:17.142606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.989 [2024-11-25 10:33:17.142619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.989 [2024-11-25 10:33:17.142630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:22.989 [2024-11-25 10:33:17.142643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.989 
[2024-11-25 10:33:17.142654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.989 [2024-11-25 10:33:17.142666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.989 [2024-11-25 10:33:17.142711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.989 [2024-11-25 10:33:17.142750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.989 [2024-11-25 10:33:17.142801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.989 [2024-11-25 10:33:17.142837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.989 [2024-11-25 10:33:17.142860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.989 [2024-11-25 10:33:17.142871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:22.989 [2024-11-25 10:33:17.142885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.989 [2024-11-25 10:33:17.142896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.989 [2024-11-25 10:33:17.142909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:22.989 [2024-11-25 10:33:17.142919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.989 [2024-11-25 10:33:17.142932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.990 [2024-11-25 10:33:17.142942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:22.990 [2024-11-25 10:33:17.142958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.990 [2024-11-25 10:33:17.142969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.990 [2024-11-25 10:33:17.142982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:22.990 [2024-11-25 10:33:17.142992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.990 [2024-11-25 10:33:17.143004] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.990 [2024-11-25 10:33:17.143017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.990 [2024-11-25 10:33:17.143033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.990 [2024-11-25 10:33:17.143044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.990 [2024-11-25 10:33:17.143057] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:22.990 [2024-11-25 10:33:17.143069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.990 [2024-11-25 10:33:17.143082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.990 [2024-11-25 10:33:17.143092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.990 [2024-11-25 10:33:17.143105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.990 [2024-11-25 10:33:17.143116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.990 [2024-11-25 10:33:17.143131] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.990 [2024-11-25 10:33:17.143145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:22.990 [2024-11-25 10:33:17.143184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:22.990 [2024-11-25 10:33:17.143202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:22.990 [2024-11-25 10:33:17.143215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:22.990 [2024-11-25 10:33:17.143233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:22.990 [2024-11-25 10:33:17.143245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:22.990 [2024-11-25 10:33:17.143262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:22.990 [2024-11-25 10:33:17.143274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:22.990 [2024-11-25 10:33:17.143291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:22.990 [2024-11-25 10:33:17.143303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:22.990 [2024-11-25 10:33:17.143379] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:22.990 [2024-11-25 
10:33:17.143393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.990 [2024-11-25 10:33:17.143442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.990 [2024-11-25 10:33:17.143459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.990 [2024-11-25 10:33:17.143473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.990 [2024-11-25 10:33:17.143491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.143504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.990 [2024-11-25 10:33:17.143522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:27:22.990 [2024-11-25 10:33:17.143534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.185461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.185529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.990 [2024-11-25 10:33:17.185559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.823 ms 00:27:22.990 [2024-11-25 10:33:17.185579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.185804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.185826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.990 [2024-11-25 10:33:17.185847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:22.990 [2024-11-25 10:33:17.185860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.233393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.233471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.990 [2024-11-25 10:33:17.233500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.484 ms 00:27:22.990 [2024-11-25 10:33:17.233514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.233689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.233710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.990 [2024-11-25 10:33:17.233730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.990 [2024-11-25 10:33:17.233744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.234341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.234386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.990 [2024-11-25 10:33:17.234411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:27:22.990 [2024-11-25 10:33:17.234423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.234616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.234647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.990 [2024-11-25 10:33:17.234668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:27:22.990 [2024-11-25 10:33:17.234680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.257143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.257207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.990 [2024-11-25 10:33:17.257235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.421 ms 00:27:22.990 [2024-11-25 10:33:17.257250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.274216] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:22.990 [2024-11-25 10:33:17.274261] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.990 [2024-11-25 10:33:17.274297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.274312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.990 [2024-11-25 10:33:17.274331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.848 ms 00:27:22.990 [2024-11-25 10:33:17.274344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.303445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.303494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.990 [2024-11-25 10:33:17.303517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.983 ms 00:27:22.990 [2024-11-25 10:33:17.303530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.990 [2024-11-25 10:33:17.318881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.990 [2024-11-25 10:33:17.318926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.990 [2024-11-25 10:33:17.318949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.237 ms 00:27:22.990 [2024-11-25 10:33:17.318961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.334197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.334242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:23.249 [2024-11-25 10:33:17.334263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.127 ms 00:27:23.249 [2024-11-25 10:33:17.334275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.335259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.335297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:23.249 [2024-11-25 10:33:17.335316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:27:23.249 [2024-11-25 10:33:17.335329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 
10:33:17.418942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.419029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:23.249 [2024-11-25 10:33:17.419056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.573 ms 00:27:23.249 [2024-11-25 10:33:17.419070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.431571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:23.249 [2024-11-25 10:33:17.452515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.452606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:23.249 [2024-11-25 10:33:17.452627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.286 ms 00:27:23.249 [2024-11-25 10:33:17.452642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.452844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.452869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:23.249 [2024-11-25 10:33:17.452883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:23.249 [2024-11-25 10:33:17.452897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.452974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.452994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:23.249 [2024-11-25 10:33:17.453008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:23.249 [2024-11-25 10:33:17.453026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.453061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.453077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:23.249 [2024-11-25 10:33:17.453090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:23.249 [2024-11-25 10:33:17.453108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.453154] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:23.249 [2024-11-25 10:33:17.453182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.453198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:23.249 [2024-11-25 10:33:17.453212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:23.249 [2024-11-25 10:33:17.453224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.484382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.484433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:23.249 [2024-11-25 10:33:17.484455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.114 ms 00:27:23.249 [2024-11-25 10:33:17.484468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.484618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.249 [2024-11-25 10:33:17.484639] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:23.249 [2024-11-25 10:33:17.484659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:23.249 [2024-11-25 10:33:17.484671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.249 [2024-11-25 10:33:17.485844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:23.249 [2024-11-25 10:33:17.489961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.155 ms, result 0 00:27:23.249 [2024-11-25 10:33:17.491088] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:23.249 Some configs were skipped because the RPC state that can call them passed over. 00:27:23.249 10:33:17 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:23.508 [2024-11-25 10:33:17.836853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.508 [2024-11-25 10:33:17.836933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:23.508 [2024-11-25 10:33:17.836955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:27:23.508 [2024-11-25 10:33:17.836970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.508 [2024-11-25 10:33:17.837019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.637 ms, result 0 00:27:23.766 true 00:27:23.766 10:33:17 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:23.766 [2024-11-25 10:33:18.096719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.766 [2024-11-25 10:33:18.096790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:23.766 [2024-11-25 10:33:18.096816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:27:23.766 [2024-11-25 10:33:18.096829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.766 [2024-11-25 10:33:18.096882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.202 ms, result 0 00:27:24.024 true 00:27:24.024 10:33:18 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78959 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78959 ']' 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78959 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78959 00:27:24.024 killing process with pid 78959 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78959' 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78959 00:27:24.024 10:33:18 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78959 00:27:25.067 [2024-11-25 10:33:19.180326] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.180406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.067 [2024-11-25 10:33:19.180428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:25.067 [2024-11-25 10:33:19.180442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.180477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:25.067 [2024-11-25 10:33:19.184174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.184241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.067 [2024-11-25 10:33:19.184269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.666 ms 00:27:25.067 [2024-11-25 10:33:19.184281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.184621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.184652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.067 [2024-11-25 10:33:19.184670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:27:25.067 [2024-11-25 10:33:19.184682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.188676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.188716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.067 [2024-11-25 10:33:19.188739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:27:25.067 [2024-11-25 10:33:19.188751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.196443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.196480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.067 [2024-11-25 10:33:19.196497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.632 ms 00:27:25.067 [2024-11-25 10:33:19.196508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.208833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.208877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.067 [2024-11-25 10:33:19.208899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.238 ms 00:27:25.067 [2024-11-25 10:33:19.208922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.218002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.218056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.067 [2024-11-25 10:33:19.218077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.007 ms 00:27:25.067 [2024-11-25 10:33:19.218090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.218272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.218294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.067 [2024-11-25 10:33:19.218310] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:25.067 [2024-11-25 10:33:19.218321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.232558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.232607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.067 [2024-11-25 10:33:19.232628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.205 ms 00:27:25.067 [2024-11-25 10:33:19.232640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.245500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.245544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.067 [2024-11-25 10:33:19.245575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.754 ms 00:27:25.067 [2024-11-25 10:33:19.245588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.257856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.067 [2024-11-25 10:33:19.257897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.067 [2024-11-25 10:33:19.257925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:27:25.067 [2024-11-25 10:33:19.257938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.067 [2024-11-25 10:33:19.269895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.068 [2024-11-25 10:33:19.269951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.068 [2024-11-25 10:33:19.269974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.841 ms 00:27:25.068 [2024-11-25 10:33:19.269986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.068 [2024-11-25 10:33:19.270054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.068 [2024-11-25 10:33:19.270079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 
10:33:19.270235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.068 [2024-11-25 10:33:19.270634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.270989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.068 [2024-11-25 10:33:19.271671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.068 [2024-11-25 10:33:19.271694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627 00:27:25.068 [2024-11-25 10:33:19.271728] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:25.068 [2024-11-25 10:33:19.271747] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:25.068 [2024-11-25 10:33:19.271759] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.068 [2024-11-25 10:33:19.271788] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.068 [2024-11-25 10:33:19.271802] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.068 [2024-11-25 10:33:19.271819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.068 [2024-11-25 10:33:19.271833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.068 [2024-11-25 10:33:19.271848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.068 [2024-11-25 10:33:19.271859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.068 [2024-11-25 10:33:19.271877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:25.068 [2024-11-25 10:33:19.271889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.068 [2024-11-25 10:33:19.271907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:27:25.068 [2024-11-25 10:33:19.271925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.068 [2024-11-25 10:33:19.288980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.068 [2024-11-25 10:33:19.289025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.068 [2024-11-25 10:33:19.289055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.999 ms 00:27:25.068 [2024-11-25 10:33:19.289069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.068 [2024-11-25 10:33:19.289609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.068 [2024-11-25 10:33:19.289643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.068 [2024-11-25 10:33:19.289673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:27:25.068 [2024-11-25 10:33:19.289686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.068 [2024-11-25 10:33:19.350944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.068 [2024-11-25 10:33:19.351014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.068 [2024-11-25 10:33:19.351042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.068 [2024-11-25 10:33:19.351057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.068 [2024-11-25 10:33:19.351217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.068 [2024-11-25 10:33:19.351236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.068 [2024-11-25 10:33:19.351264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.068 [2024-11-25 10:33:19.351282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.068 [2024-11-25 10:33:19.351371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.068 [2024-11-25 10:33:19.351391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.069 [2024-11-25 10:33:19.351416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.069 [2024-11-25 10:33:19.351428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.069 [2024-11-25 10:33:19.351464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.069 [2024-11-25 10:33:19.351484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.069 [2024-11-25 10:33:19.351502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.069 [2024-11-25 10:33:19.351521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.326 [2024-11-25 10:33:19.464121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.326 [2024-11-25 10:33:19.464225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.326 [2024-11-25 10:33:19.464254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.326 [2024-11-25 10:33:19.464269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 
10:33:19.552133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.552223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.327 [2024-11-25 10:33:19.552251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.552271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.552394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.552414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.327 [2024-11-25 10:33:19.552439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.552452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.552499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.552515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.327 [2024-11-25 10:33:19.552532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.552553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.552699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.552724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.327 [2024-11-25 10:33:19.552745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.552758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.552848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.552868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.327 [2024-11-25 10:33:19.552887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.552899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.552963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.552979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.327 [2024-11-25 10:33:19.553003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.553015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.553081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.327 [2024-11-25 10:33:19.553105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.327 [2024-11-25 10:33:19.553124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.327 [2024-11-25 10:33:19.553137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.327 [2024-11-25 10:33:19.553333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.976 ms, result 0 00:27:26.262 10:33:20 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:26.520 [2024-11-25 10:33:20.660941] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:27:26.520 [2024-11-25 10:33:20.661150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79023 ] 00:27:26.520 [2024-11-25 10:33:20.850664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.779 [2024-11-25 10:33:20.997615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.347 [2024-11-25 10:33:21.376363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:27.347 [2024-11-25 10:33:21.376450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:27.347 [2024-11-25 10:33:21.556688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.556764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:27.347 [2024-11-25 10:33:21.556818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:27.347 [2024-11-25 10:33:21.556829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.560316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.560373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:27.347 [2024-11-25 10:33:21.560404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:27:27.347 [2024-11-25 10:33:21.560430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.560570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:27.347 [2024-11-25 10:33:21.561581] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:27.347 [2024-11-25 10:33:21.561631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.561644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:27.347 [2024-11-25 10:33:21.561656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:27:27.347 [2024-11-25 10:33:21.561667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.563846] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:27.347 [2024-11-25 10:33:21.580365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.580460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:27.347 [2024-11-25 10:33:21.580494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.520 ms 00:27:27.347 [2024-11-25 10:33:21.580519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.580661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.580682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:27.347 [2024-11-25 10:33:21.580694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:27.347 [2024-11-25 
10:33:21.580705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.589746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.589812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:27.347 [2024-11-25 10:33:21.589842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.952 ms 00:27:27.347 [2024-11-25 10:33:21.589853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.589977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.589997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:27.347 [2024-11-25 10:33:21.590009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:27.347 [2024-11-25 10:33:21.590035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.590097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.590135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:27.347 [2024-11-25 10:33:21.590148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:27.347 [2024-11-25 10:33:21.590160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.590197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:27.347 [2024-11-25 10:33:21.595293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.595345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:27.347 [2024-11-25 10:33:21.595361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:27:27.347 [2024-11-25 10:33:21.595372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.595460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.595493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:27.347 [2024-11-25 10:33:21.595505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:27.347 [2024-11-25 10:33:21.595516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.595547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:27.347 [2024-11-25 10:33:21.595611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:27.347 [2024-11-25 10:33:21.595668] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:27.347 [2024-11-25 10:33:21.595687] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:27.347 [2024-11-25 10:33:21.595793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:27.347 [2024-11-25 10:33:21.595825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:27.347 [2024-11-25 10:33:21.595857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:27:27.347 [2024-11-25 10:33:21.595876] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:27.347 [2024-11-25 10:33:21.595895] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:27.347 [2024-11-25 10:33:21.595908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:27.347 [2024-11-25 10:33:21.595919] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:27.347 [2024-11-25 10:33:21.595930] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:27.347 [2024-11-25 10:33:21.595941] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:27.347 [2024-11-25 10:33:21.595953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.595964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:27.347 [2024-11-25 10:33:21.595976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:27:27.347 [2024-11-25 10:33:21.595987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.596088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.347 [2024-11-25 10:33:21.596104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:27.347 [2024-11-25 10:33:21.596121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:27.347 [2024-11-25 10:33:21.596132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.347 [2024-11-25 10:33:21.596250] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:27.347 [2024-11-25 10:33:21.596268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:27.347 [2024-11-25 10:33:21.596280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:27.347 [2024-11-25 10:33:21.596291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.347 [2024-11-25 10:33:21.596303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:27.348 [2024-11-25 10:33:21.596313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:27.348 [2024-11-25 10:33:21.596346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:27.348 [2024-11-25 10:33:21.596367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:27.348 [2024-11-25 10:33:21.596377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:27.348 [2024-11-25 10:33:21.596387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:27.348 [2024-11-25 10:33:21.596411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:27.348 [2024-11-25 10:33:21.596423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:27.348 [2024-11-25 10:33:21.596433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:27:27.348 [2024-11-25 10:33:21.596454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:27.348 [2024-11-25 10:33:21.596485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:27.348 [2024-11-25 10:33:21.596515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:27.348 [2024-11-25 10:33:21.596545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:27.348 [2024-11-25 10:33:21.596576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:27.348 [2024-11-25 10:33:21.596606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:27.348 [2024-11-25 10:33:21.596627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:27.348 [2024-11-25 10:33:21.596637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:27.348 [2024-11-25 10:33:21.596647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:27.348 [2024-11-25 10:33:21.596657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:27.348 [2024-11-25 10:33:21.596668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:27.348 [2024-11-25 10:33:21.596681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:27.348 [2024-11-25 10:33:21.596702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:27.348 [2024-11-25 10:33:21.596712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596723] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:27.348 [2024-11-25 10:33:21.596735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:27.348 [2024-11-25 10:33:21.596745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.348 [2024-11-25 10:33:21.596790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:27.348 [2024-11-25 10:33:21.596804] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:27.348 [2024-11-25 10:33:21.596814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:27.348 [2024-11-25 10:33:21.596825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:27.348 [2024-11-25 10:33:21.596835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:27.348 [2024-11-25 10:33:21.596845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:27.348 [2024-11-25 10:33:21.596857] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:27.348 [2024-11-25 10:33:21.596871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.596895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:27.348 [2024-11-25 10:33:21.596907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:27.348 [2024-11-25 10:33:21.596918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:27.348 [2024-11-25 10:33:21.596931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:27.348 [2024-11-25 10:33:21.596942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:27.348 [2024-11-25 10:33:21.596954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:27.348 [2024-11-25 10:33:21.596965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:27.348 [2024-11-25 10:33:21.596976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:27.348 [2024-11-25 10:33:21.596987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:27.348 [2024-11-25 10:33:21.596998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.597009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.597034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.597045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.597056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:27.348 [2024-11-25 10:33:21.597067] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:27.348 [2024-11-25 10:33:21.597079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.597092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:27.348 [2024-11-25 10:33:21.597119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:27.348 [2024-11-25 10:33:21.597130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:27.348 [2024-11-25 10:33:21.597142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:27.348 [2024-11-25 10:33:21.597154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.348 [2024-11-25 10:33:21.597165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:27.348 [2024-11-25 10:33:21.597183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:27:27.348 [2024-11-25 10:33:21.597195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.348 [2024-11-25 10:33:21.638185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.348 [2024-11-25 10:33:21.638274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:27.348 [2024-11-25 10:33:21.638294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.911 ms 00:27:27.348 [2024-11-25 10:33:21.638307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.348 [2024-11-25 10:33:21.638514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.348 [2024-11-25 10:33:21.638542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:27.348 [2024-11-25 10:33:21.638557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:27.348 [2024-11-25 10:33:21.638568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.696809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.696879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:27.608 [2024-11-25 10:33:21.696900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.199 ms 00:27:27.608 [2024-11-25 10:33:21.696918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.697096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.697118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:27.608 [2024-11-25 10:33:21.697131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:27.608 [2024-11-25 10:33:21.697143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.697703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.697731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:27.608 [2024-11-25 10:33:21.697746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:27:27.608 [2024-11-25 10:33:21.697766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.697962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.697987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:27.608 [2024-11-25 10:33:21.698001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:27:27.608 [2024-11-25 10:33:21.698012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.717914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.717964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:27.608 [2024-11-25 10:33:21.717983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.870 ms 00:27:27.608 [2024-11-25 10:33:21.717995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.734814] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:27.608 [2024-11-25 10:33:21.734860] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:27.608 [2024-11-25 10:33:21.734879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.734892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:27.608 [2024-11-25 10:33:21.734911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.722 ms 00:27:27.608 [2024-11-25 10:33:21.734922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.764411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.764466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:27.608 [2024-11-25 10:33:21.764484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.392 ms 00:27:27.608 [2024-11-25 10:33:21.764496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.779974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.780015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:27.608 [2024-11-25 10:33:21.780032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.378 ms 00:27:27.608 [2024-11-25 10:33:21.780043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.795466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.795507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:27.608 [2024-11-25 10:33:21.795523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.330 ms 00:27:27.608 [2024-11-25 10:33:21.795534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.796455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.796486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:27.608 [2024-11-25 10:33:21.796501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:27:27.608 [2024-11-25 10:33:21.796512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.876352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 
10:33:21.876429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:27.608 [2024-11-25 10:33:21.876450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.803 ms 00:27:27.608 [2024-11-25 10:33:21.876463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.889248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:27.608 [2024-11-25 10:33:21.910188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.910261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:27.608 [2024-11-25 10:33:21.910283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.577 ms 00:27:27.608 [2024-11-25 10:33:21.910304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.910467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.910488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:27.608 [2024-11-25 10:33:21.910502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:27.608 [2024-11-25 10:33:21.910515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.910595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.910612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:27.608 [2024-11-25 10:33:21.910625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:27.608 [2024-11-25 10:33:21.910643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.910683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.910698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:27.608 [2024-11-25 10:33:21.910710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:27.608 [2024-11-25 10:33:21.910722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.608 [2024-11-25 10:33:21.910767] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:27.608 [2024-11-25 10:33:21.910812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.608 [2024-11-25 10:33:21.910824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:27.608 [2024-11-25 10:33:21.910837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:27.608 [2024-11-25 10:33:21.910849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.867 [2024-11-25 10:33:21.944828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.867 [2024-11-25 10:33:21.944899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:27.867 [2024-11-25 10:33:21.944920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.940 ms 00:27:27.867 [2024-11-25 10:33:21.944934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.867 [2024-11-25 10:33:21.945086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.867 [2024-11-25 10:33:21.945109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:27.867 [2024-11-25 
10:33:21.945122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:27.867 [2024-11-25 10:33:21.945134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.867 [2024-11-25 10:33:21.946333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:27.867 [2024-11-25 10:33:21.950464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.252 ms, result 0 00:27:27.867 [2024-11-25 10:33:21.951229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:27.867 [2024-11-25 10:33:21.967413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:28.805  [2024-11-25T10:33:24.073Z] Copying: 27/256 [MB] (27 MBps) [2024-11-25T10:33:25.053Z] Copying: 52/256 [MB] (25 MBps) [2024-11-25T10:33:26.436Z] Copying: 77/256 [MB] (25 MBps) [2024-11-25T10:33:27.367Z] Copying: 101/256 [MB] (23 MBps) [2024-11-25T10:33:28.349Z] Copying: 124/256 [MB] (22 MBps) [2024-11-25T10:33:29.282Z] Copying: 148/256 [MB] (23 MBps) [2024-11-25T10:33:30.216Z] Copying: 172/256 [MB] (24 MBps) [2024-11-25T10:33:31.150Z] Copying: 196/256 [MB] (24 MBps) [2024-11-25T10:33:32.084Z] Copying: 221/256 [MB] (24 MBps) [2024-11-25T10:33:32.651Z] Copying: 245/256 [MB] (24 MBps) [2024-11-25T10:33:32.910Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-25 10:33:32.780042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:38.577 [2024-11-25 10:33:32.794172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.577 [2024-11-25 10:33:32.794220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:38.577 [2024-11-25 10:33:32.794253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.577 [2024-11-25 10:33:32.794275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.577 [2024-11-25 10:33:32.794311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:38.577 [2024-11-25 10:33:32.798555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.577 [2024-11-25 10:33:32.798596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:38.578 [2024-11-25 10:33:32.798612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:27:38.578 [2024-11-25 10:33:32.798625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.798987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.799018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:38.578 [2024-11-25 10:33:32.799032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:27:38.578 [2024-11-25 10:33:32.799044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.802700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.802740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:38.578 [2024-11-25 10:33:32.802756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:27:38.578 [2024-11-25 10:33:32.802777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.810357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.810432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:38.578 [2024-11-25 10:33:32.810448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:27:38.578 [2024-11-25 10:33:32.810460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.841127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.841175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:38.578 [2024-11-25 10:33:32.841193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.583 ms 00:27:38.578 [2024-11-25 10:33:32.841205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.859677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.859727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:38.578 [2024-11-25 10:33:32.859753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:27:38.578 [2024-11-25 10:33:32.859766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.859951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.859973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:38.578 [2024-11-25 10:33:32.859987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:27:38.578 [2024-11-25 10:33:32.859999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.578 [2024-11-25 10:33:32.891157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.578 [2024-11-25 10:33:32.891203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:38.578 [2024-11-25 10:33:32.891219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.116 ms 00:27:38.578 [2024-11-25 10:33:32.891231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.838 [2024-11-25 10:33:32.921762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.838 [2024-11-25 10:33:32.921824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:38.838 [2024-11-25 10:33:32.921842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.425 ms 00:27:38.838 [2024-11-25 10:33:32.921854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.838 [2024-11-25 10:33:32.952023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.838 [2024-11-25 10:33:32.952069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:38.838 [2024-11-25 10:33:32.952086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.101 ms 00:27:38.838 [2024-11-25 10:33:32.952097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.838 [2024-11-25 10:33:32.982160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.838 [2024-11-25 10:33:32.982206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:38.838 [2024-11-25 10:33:32.982223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.953 ms 00:27:38.838 
[2024-11-25 10:33:32.982236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.838 [2024-11-25 10:33:32.982308] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:38.838 [2024-11-25 10:33:32.982334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982643] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:38.838 [2024-11-25 10:33:32.982918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.982930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.982941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.982953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 
10:33:32.982964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.982984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.982996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:38.839 [2024-11-25 10:33:32.983267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:38.839 [2024-11-25 10:33:32.983628] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:38.839 [2024-11-25 10:33:32.983640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 320fbe64-4a13-4fdf-8f16-2944badb2627 00:27:38.839 [2024-11-25 10:33:32.983652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:38.839 [2024-11-25 10:33:32.983663] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:38.839 [2024-11-25 10:33:32.983674] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:38.839 [2024-11-25 10:33:32.983686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:38.839 [2024-11-25 10:33:32.983697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:38.839 [2024-11-25 10:33:32.983708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:38.839 [2024-11-25 10:33:32.983725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:38.839 [2024-11-25 10:33:32.983735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:38.839 [2024-11-25 10:33:32.983745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:38.839 [2024-11-25 10:33:32.983756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.839 [2024-11-25 10:33:32.983768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:38.839 [2024-11-25 10:33:32.983796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:27:38.839 [2024-11-25 10:33:32.983808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.000917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.839 [2024-11-25 10:33:33.000960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:38.839 [2024-11-25 10:33:33.000978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.081 ms 00:27:38.839 [2024-11-25 10:33:33.000990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.001511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.839 [2024-11-25 10:33:33.001545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:38.839 [2024-11-25 10:33:33.001560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:27:38.839 [2024-11-25 10:33:33.001572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.049308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.839 [2024-11-25 10:33:33.049368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.839 [2024-11-25 10:33:33.049385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.839 [2024-11-25 10:33:33.049403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.049530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.839 [2024-11-25 10:33:33.049549] mngt/ftl_mngt.c: 
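
The dump above is FTL's shutdown accounting for ftl0: every band reports "0 / 261120 wr_cnt: 0 state: free", and the stats block shows 960 total writes against 0 user writes, which is why the write-amplification factor prints as "WAF: inf" (960 / 0). A minimal sketch for summarizing such a dump offline, assuming the console output was saved to a file named ftl.log (an illustrative name; the test writes no such file). It scans fields rather than whole lines, so it works whether the log keeps one entry per line or runs several together:

    # Tally band states and total wr_cnt from ftl_dev_dump_bands output.
    awk '/ftl_dev_dump_bands/ {
           for (i = 1; i <= NF; i++) {
             if ($i == "state:")  state[$(i + 1)]++   # e.g. free, open, closed
             if ($i == "wr_cnt:") wr += $(i + 1)      # per-band write counter
           }
         }
         END {
           for (s in state) printf "%-8s %d bands\n", s, state[s]
           printf "total wr_cnt: %d\n", wr
         }' ftl.log
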
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.839 [2024-11-25 10:33:33.049562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.839 [2024-11-25 10:33:33.049573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.049650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.839 [2024-11-25 10:33:33.049669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.839 [2024-11-25 10:33:33.049682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.839 [2024-11-25 10:33:33.049694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.049730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.839 [2024-11-25 10:33:33.049745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.839 [2024-11-25 10:33:33.049757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.839 [2024-11-25 10:33:33.049783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.839 [2024-11-25 10:33:33.158532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.839 [2024-11-25 10:33:33.158611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.839 [2024-11-25 10:33:33.158630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.839 [2024-11-25 10:33:33.158643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.244855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.244927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.098 [2024-11-25 10:33:33.244947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.244960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.245076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.098 [2024-11-25 10:33:33.245089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.245101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.245165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.098 [2024-11-25 10:33:33.245184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.245195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.245353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:39.098 [2024-11-25 10:33:33.245366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.245377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.245449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:39.098 [2024-11-25 10:33:33.245468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.245479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.245558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:39.098 [2024-11-25 10:33:33.245571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.245582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:39.098 [2024-11-25 10:33:33.245701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:39.098 [2024-11-25 10:33:33.245715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:39.098 [2024-11-25 10:33:33.245727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.098 [2024-11-25 10:33:33.245945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.756 ms, result 0 00:27:40.035 00:27:40.035 00:27:40.035 10:33:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:40.654 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:40.654 Process with pid 78959 is not found 00:27:40.654 10:33:34 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78959 00:27:40.654 10:33:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78959 ']' 00:27:40.654 10:33:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78959 00:27:40.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78959) - No such process 00:27:40.654 10:33:34 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78959 is not found' 00:27:40.654 00:27:40.654 real 1m12.167s 00:27:40.654 user 1m39.181s 00:27:40.654 sys 0m8.063s 00:27:40.654 10:33:34 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.654 ************************************ 00:27:40.654 END TEST ftl_trim 00:27:40.654 ************************************ 00:27:40.654 10:33:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:40.654 10:33:34 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:27:40.654 10:33:34 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:40.654 10:33:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.654 10:33:34 ftl -- common/autotest_common.sh@10 
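
run_test only wraps each suite in xtrace bookkeeping and the START/END banners; the command it runs is an ordinary script. A sketch of the equivalent standalone invocation, assuming the same checkout and the two QEMU NVMe controllers bound as in this run: 0000:00:10.0 for the NV cache, 0000:00:11.0 for the base device. Running as root via sudo is an assumption; the test needs hugepage and device access:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
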
-- # set +x 00:27:40.654 ************************************ 00:27:40.654 START TEST ftl_restore 00:27:40.654 ************************************ 00:27:40.654 10:33:34 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:27:40.932 * Looking for test storage... 00:27:40.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.932 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:40.932 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:27:40.932 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:40.932 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:27:40.932 10:33:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.933 10:33:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:40.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.933 --rc genhtml_branch_coverage=1 00:27:40.933 --rc genhtml_function_coverage=1 00:27:40.933 --rc genhtml_legend=1 00:27:40.933 --rc geninfo_all_blocks=1 00:27:40.933 --rc geninfo_unexecuted_blocks=1 00:27:40.933 00:27:40.933 ' 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:40.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.933 --rc genhtml_branch_coverage=1 00:27:40.933 --rc genhtml_function_coverage=1 00:27:40.933 --rc genhtml_legend=1 00:27:40.933 --rc geninfo_all_blocks=1 00:27:40.933 --rc geninfo_unexecuted_blocks=1 00:27:40.933 00:27:40.933 ' 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:40.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.933 --rc genhtml_branch_coverage=1 00:27:40.933 --rc genhtml_function_coverage=1 00:27:40.933 --rc genhtml_legend=1 00:27:40.933 --rc geninfo_all_blocks=1 00:27:40.933 --rc geninfo_unexecuted_blocks=1 00:27:40.933 00:27:40.933 ' 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:40.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.933 --rc genhtml_branch_coverage=1 00:27:40.933 --rc genhtml_function_coverage=1 00:27:40.933 --rc genhtml_legend=1 00:27:40.933 --rc geninfo_all_blocks=1 00:27:40.933 --rc geninfo_unexecuted_blocks=1 00:27:40.933 00:27:40.933 ' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
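
The scripts/common.sh trace above is a plain semantic-version comparison: autotest reads "lcov --version", asks lt 1.15 2, splits both strings on ".-:", and walks the components; since 1 < 2 the legacy "--rc lcov_*" option spelling is selected. Reduced to a sketch with the same IFS split and loop shape (a condensation, not the script verbatim; assumes digits-only components, which is all the trace exercises):

    # lt A B: succeed (return 0) when version A sorts before version B.
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
      done
      return 1                                            # equal: not less-than
    }
    lt 1.15 2 && echo "lcov < 2: keep the legacy --rc option names"
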
00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VDHCrwDtx5 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:40.933 
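
What the trace above shows is restore.sh parsing its own command line: getopts routes "-c 0000:00:10.0" into nv_cache, the consumed pair is shifted away, the remaining positional 0000:00:11.0 becomes the device under test, and timeout=240 caps the long-running RPCs that follow. The same pattern condensed (variable names follow the trace; the literal "shift 2" at restore.sh@23 is expressed here through the equivalent OPTIND form):

    nv_cache="" timeout=240
    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;     # -c <bdf>: NV-cache controller address
        *) ;;                      # -u / -f select other restore.sh modes
      esac
    done
    shift $((OPTIND - 1))          # equivalent of the traced "shift 2"
    device=$1                      # base-device BDF, here 0000:00:11.0
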
10:33:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79232 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79232 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79232 ']' 00:27:40.933 10:33:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.933 10:33:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:41.191 [2024-11-25 10:33:35.288234] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:27:41.191 [2024-11-25 10:33:35.288423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79232 ] 00:27:41.191 [2024-11-25 10:33:35.475523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.451 [2024-11-25 10:33:35.611332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.385 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.385 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:27:42.385 10:33:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:42.385 10:33:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:27:42.385 10:33:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:42.385 10:33:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:27:42.385 10:33:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:27:42.385 10:33:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:42.643 10:33:36 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:42.643 10:33:36 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:27:42.643 10:33:36 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:42.643 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:42.643 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:42.643 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:42.643 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:42.643 10:33:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:42.902 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:42.902 { 00:27:42.902 "name": "nvme0n1", 00:27:42.902 "aliases": [ 00:27:42.902 "93f3a620-5544-46ef-844d-458602149d15" 00:27:42.902 ], 00:27:42.902 "product_name": "NVMe disk", 00:27:42.902 "block_size": 4096, 00:27:42.902 "num_blocks": 1310720, 00:27:42.902 "uuid": 
"93f3a620-5544-46ef-844d-458602149d15", 00:27:42.902 "numa_id": -1, 00:27:42.902 "assigned_rate_limits": { 00:27:42.902 "rw_ios_per_sec": 0, 00:27:42.902 "rw_mbytes_per_sec": 0, 00:27:42.902 "r_mbytes_per_sec": 0, 00:27:42.902 "w_mbytes_per_sec": 0 00:27:42.902 }, 00:27:42.902 "claimed": true, 00:27:42.902 "claim_type": "read_many_write_one", 00:27:42.902 "zoned": false, 00:27:42.902 "supported_io_types": { 00:27:42.902 "read": true, 00:27:42.902 "write": true, 00:27:42.902 "unmap": true, 00:27:42.902 "flush": true, 00:27:42.902 "reset": true, 00:27:42.902 "nvme_admin": true, 00:27:42.902 "nvme_io": true, 00:27:42.902 "nvme_io_md": false, 00:27:42.902 "write_zeroes": true, 00:27:42.902 "zcopy": false, 00:27:42.902 "get_zone_info": false, 00:27:42.902 "zone_management": false, 00:27:42.902 "zone_append": false, 00:27:42.902 "compare": true, 00:27:42.902 "compare_and_write": false, 00:27:42.902 "abort": true, 00:27:42.902 "seek_hole": false, 00:27:42.902 "seek_data": false, 00:27:42.902 "copy": true, 00:27:42.902 "nvme_iov_md": false 00:27:42.902 }, 00:27:42.902 "driver_specific": { 00:27:42.902 "nvme": [ 00:27:42.902 { 00:27:42.902 "pci_address": "0000:00:11.0", 00:27:42.902 "trid": { 00:27:42.902 "trtype": "PCIe", 00:27:42.902 "traddr": "0000:00:11.0" 00:27:42.902 }, 00:27:42.902 "ctrlr_data": { 00:27:42.902 "cntlid": 0, 00:27:42.902 "vendor_id": "0x1b36", 00:27:42.902 "model_number": "QEMU NVMe Ctrl", 00:27:42.902 "serial_number": "12341", 00:27:42.902 "firmware_revision": "8.0.0", 00:27:42.902 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:42.902 "oacs": { 00:27:42.902 "security": 0, 00:27:42.902 "format": 1, 00:27:42.902 "firmware": 0, 00:27:42.902 "ns_manage": 1 00:27:42.902 }, 00:27:42.902 "multi_ctrlr": false, 00:27:42.902 "ana_reporting": false 00:27:42.902 }, 00:27:42.902 "vs": { 00:27:42.902 "nvme_version": "1.4" 00:27:42.902 }, 00:27:42.902 "ns_data": { 00:27:42.902 "id": 1, 00:27:42.902 "can_share": false 00:27:42.902 } 00:27:42.902 } 00:27:42.902 ], 00:27:42.902 "mp_policy": "active_passive" 00:27:42.902 } 00:27:42.902 } 00:27:42.902 ]' 00:27:42.902 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:42.902 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:42.902 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:43.161 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:43.161 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:43.161 10:33:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:27:43.161 10:33:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:27:43.161 10:33:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:43.161 10:33:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:27:43.161 10:33:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:43.161 10:33:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:43.420 10:33:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3bdcbaea-9339-403f-9758-8d4613c774a8 00:27:43.420 10:33:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:27:43.420 10:33:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bdcbaea-9339-403f-9758-8d4613c774a8 00:27:43.680 10:33:37 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:27:43.938 10:33:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=45b2af95-278b-42b9-a2fe-03a52a784609 00:27:43.938 10:33:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 45b2af95-278b-42b9-a2fe-03a52a784609 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:27:44.197 10:33:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.197 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.197 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:44.197 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:44.197 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:44.197 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.456 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:44.456 { 00:27:44.456 "name": "8fb941ce-6649-4f7e-9ce2-0d0ce553be82", 00:27:44.456 "aliases": [ 00:27:44.457 "lvs/nvme0n1p0" 00:27:44.457 ], 00:27:44.457 "product_name": "Logical Volume", 00:27:44.457 "block_size": 4096, 00:27:44.457 "num_blocks": 26476544, 00:27:44.457 "uuid": "8fb941ce-6649-4f7e-9ce2-0d0ce553be82", 00:27:44.457 "assigned_rate_limits": { 00:27:44.457 "rw_ios_per_sec": 0, 00:27:44.457 "rw_mbytes_per_sec": 0, 00:27:44.457 "r_mbytes_per_sec": 0, 00:27:44.457 "w_mbytes_per_sec": 0 00:27:44.457 }, 00:27:44.457 "claimed": false, 00:27:44.457 "zoned": false, 00:27:44.457 "supported_io_types": { 00:27:44.457 "read": true, 00:27:44.457 "write": true, 00:27:44.457 "unmap": true, 00:27:44.457 "flush": false, 00:27:44.457 "reset": true, 00:27:44.457 "nvme_admin": false, 00:27:44.457 "nvme_io": false, 00:27:44.457 "nvme_io_md": false, 00:27:44.457 "write_zeroes": true, 00:27:44.457 "zcopy": false, 00:27:44.457 "get_zone_info": false, 00:27:44.457 "zone_management": false, 00:27:44.457 "zone_append": false, 00:27:44.457 "compare": false, 00:27:44.457 "compare_and_write": false, 00:27:44.457 "abort": false, 00:27:44.457 "seek_hole": true, 00:27:44.457 "seek_data": true, 00:27:44.457 "copy": false, 00:27:44.457 "nvme_iov_md": false 00:27:44.457 }, 00:27:44.457 "driver_specific": { 00:27:44.457 "lvol": { 00:27:44.457 "lvol_store_uuid": "45b2af95-278b-42b9-a2fe-03a52a784609", 00:27:44.457 "base_bdev": "nvme0n1", 00:27:44.457 "thin_provision": true, 00:27:44.457 "num_allocated_clusters": 0, 00:27:44.457 "snapshot": false, 00:27:44.457 "clone": false, 00:27:44.457 "esnap_clone": false 00:27:44.457 } 00:27:44.457 } 00:27:44.457 } 00:27:44.457 ]' 00:27:44.457 10:33:38 ftl.ftl_restore -- 
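
Note the sizes: nvme0n1 is only 5120 MiB (1310720 blocks of 4096 B), yet the volume created on it is 103424 MiB. That is legal because the volume is thin-provisioned (-t): clusters are allocated on first write, and the bdev dump accordingly reports num_allocated_clusters: 0. As a one-off call (UUIDs are the ones minted in this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 103424 MiB logical size on a 5120 MiB namespace: thin provisioning
    $RPC bdev_lvol_create nvme0n1p0 103424 -t \
        -u 45b2af95-278b-42b9-a2fe-03a52a784609
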
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:44.715 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:44.715 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:44.715 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:44.715 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:44.715 10:33:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:27:44.715 10:33:38 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:27:44.715 10:33:38 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:27:44.715 10:33:38 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:44.974 10:33:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:44.974 10:33:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:44.974 10:33:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.974 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:44.974 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:44.974 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:44.974 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:44.974 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:45.232 { 00:27:45.232 "name": "8fb941ce-6649-4f7e-9ce2-0d0ce553be82", 00:27:45.232 "aliases": [ 00:27:45.232 "lvs/nvme0n1p0" 00:27:45.232 ], 00:27:45.232 "product_name": "Logical Volume", 00:27:45.232 "block_size": 4096, 00:27:45.232 "num_blocks": 26476544, 00:27:45.232 "uuid": "8fb941ce-6649-4f7e-9ce2-0d0ce553be82", 00:27:45.232 "assigned_rate_limits": { 00:27:45.232 "rw_ios_per_sec": 0, 00:27:45.232 "rw_mbytes_per_sec": 0, 00:27:45.232 "r_mbytes_per_sec": 0, 00:27:45.232 "w_mbytes_per_sec": 0 00:27:45.232 }, 00:27:45.232 "claimed": false, 00:27:45.232 "zoned": false, 00:27:45.232 "supported_io_types": { 00:27:45.232 "read": true, 00:27:45.232 "write": true, 00:27:45.232 "unmap": true, 00:27:45.232 "flush": false, 00:27:45.232 "reset": true, 00:27:45.232 "nvme_admin": false, 00:27:45.232 "nvme_io": false, 00:27:45.232 "nvme_io_md": false, 00:27:45.232 "write_zeroes": true, 00:27:45.232 "zcopy": false, 00:27:45.232 "get_zone_info": false, 00:27:45.232 "zone_management": false, 00:27:45.232 "zone_append": false, 00:27:45.232 "compare": false, 00:27:45.232 "compare_and_write": false, 00:27:45.232 "abort": false, 00:27:45.232 "seek_hole": true, 00:27:45.232 "seek_data": true, 00:27:45.232 "copy": false, 00:27:45.232 "nvme_iov_md": false 00:27:45.232 }, 00:27:45.232 "driver_specific": { 00:27:45.232 "lvol": { 00:27:45.232 "lvol_store_uuid": "45b2af95-278b-42b9-a2fe-03a52a784609", 00:27:45.232 "base_bdev": "nvme0n1", 00:27:45.232 "thin_provision": true, 00:27:45.232 "num_allocated_clusters": 0, 00:27:45.232 "snapshot": false, 00:27:45.232 "clone": false, 00:27:45.232 "esnap_clone": false 00:27:45.232 } 00:27:45.232 } 00:27:45.232 } 00:27:45.232 ]' 00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:45.232 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:27:45.232 10:33:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:27:45.232 10:33:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:45.490 10:33:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:45.490 10:33:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:45.490 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:45.490 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:45.490 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:27:45.490 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:27:45.490 10:33:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 00:27:46.057 10:33:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:46.057 { 00:27:46.057 "name": "8fb941ce-6649-4f7e-9ce2-0d0ce553be82", 00:27:46.057 "aliases": [ 00:27:46.057 "lvs/nvme0n1p0" 00:27:46.057 ], 00:27:46.057 "product_name": "Logical Volume", 00:27:46.057 "block_size": 4096, 00:27:46.057 "num_blocks": 26476544, 00:27:46.057 "uuid": "8fb941ce-6649-4f7e-9ce2-0d0ce553be82", 00:27:46.057 "assigned_rate_limits": { 00:27:46.057 "rw_ios_per_sec": 0, 00:27:46.057 "rw_mbytes_per_sec": 0, 00:27:46.057 "r_mbytes_per_sec": 0, 00:27:46.057 "w_mbytes_per_sec": 0 00:27:46.057 }, 00:27:46.057 "claimed": false, 00:27:46.057 "zoned": false, 00:27:46.057 "supported_io_types": { 00:27:46.057 "read": true, 00:27:46.057 "write": true, 00:27:46.057 "unmap": true, 00:27:46.057 "flush": false, 00:27:46.057 "reset": true, 00:27:46.057 "nvme_admin": false, 00:27:46.057 "nvme_io": false, 00:27:46.057 "nvme_io_md": false, 00:27:46.057 "write_zeroes": true, 00:27:46.057 "zcopy": false, 00:27:46.057 "get_zone_info": false, 00:27:46.057 "zone_management": false, 00:27:46.057 "zone_append": false, 00:27:46.057 "compare": false, 00:27:46.057 "compare_and_write": false, 00:27:46.057 "abort": false, 00:27:46.057 "seek_hole": true, 00:27:46.057 "seek_data": true, 00:27:46.057 "copy": false, 00:27:46.057 "nvme_iov_md": false 00:27:46.057 }, 00:27:46.057 "driver_specific": { 00:27:46.057 "lvol": { 00:27:46.057 "lvol_store_uuid": "45b2af95-278b-42b9-a2fe-03a52a784609", 00:27:46.057 "base_bdev": "nvme0n1", 00:27:46.057 "thin_provision": true, 00:27:46.057 "num_allocated_clusters": 0, 00:27:46.057 "snapshot": false, 00:27:46.057 "clone": false, 00:27:46.057 "esnap_clone": false 00:27:46.057 } 00:27:46.057 } 00:27:46.057 } 00:27:46.057 ]' 00:27:46.057 10:33:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:46.057 10:33:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:27:46.057 10:33:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:46.057 10:33:40 ftl.ftl_restore -- 
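
The cache side mirrors the base side: the second controller comes up as nvc0 and a single 5171 MiB partition is split off its namespace to become the FTL write buffer. The 5171 MiB figure is what ftl/common.sh derives for a 103424 MiB base volume (numerically about 5 % of it; the exact formula lives in that script). The two calls with this run's values:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1   # one 5171 MiB split -> nvc0n1p0
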
common/autotest_common.sh@1388 -- # nb=26476544 00:27:46.057 10:33:40 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:46.057 10:33:40 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 --l2p_dram_limit 10' 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:27:46.057 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:27:46.057 10:33:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 --l2p_dram_limit 10 -c nvc0n1p0 00:27:46.317 [2024-11-25 10:33:40.456193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.456438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:46.317 [2024-11-25 10:33:40.456580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:46.317 [2024-11-25 10:33:40.456606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.456706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.456726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.317 [2024-11-25 10:33:40.456743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:46.317 [2024-11-25 10:33:40.456755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.456822] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:46.317 [2024-11-25 10:33:40.458006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:46.317 [2024-11-25 10:33:40.458108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.458263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.317 [2024-11-25 10:33:40.458408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:27:46.317 [2024-11-25 10:33:40.458464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.458712] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a4a74b76-6e08-405f-b94b-34432b8ef08f 00:27:46.317 [2024-11-25 10:33:40.460637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.460816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:46.317 [2024-11-25 10:33:40.460943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:46.317 [2024-11-25 10:33:40.461093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.470885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 
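
Two details above are easy to misread. The "[: : integer expression expected" from restore.sh line 54 is benign: an unset flag variable (the empty string in the trace) is tested with -eq 1, the test simply evaluates false, and the script proceeds. Then the accumulated ftl_construct_args are executed, binding the thin volume (base device) and the cache split into ftl0 with the L2P table capped at 10 MiB of DRAM. The call on its own (values from this run; -t 240 matches the timeout parsed earlier):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -t 240 bdev_ftl_create -b ftl0 \
        -d 8fb941ce-6649-4f7e-9ce2-0d0ce553be82 \
        --l2p_dram_limit 10 -c nvc0n1p0
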
10:33:40.471107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.317 [2024-11-25 10:33:40.471236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.658 ms 00:27:46.317 [2024-11-25 10:33:40.471292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.471467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.471504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.317 [2024-11-25 10:33:40.471519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:46.317 [2024-11-25 10:33:40.471539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.471631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.471656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:46.317 [2024-11-25 10:33:40.471670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:46.317 [2024-11-25 10:33:40.471689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.471725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:46.317 [2024-11-25 10:33:40.476998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.477157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.317 [2024-11-25 10:33:40.477193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.279 ms 00:27:46.317 [2024-11-25 10:33:40.477208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.477260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.477277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:46.317 [2024-11-25 10:33:40.477293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:46.317 [2024-11-25 10:33:40.477305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.477358] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:46.317 [2024-11-25 10:33:40.477528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:46.317 [2024-11-25 10:33:40.477554] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:46.317 [2024-11-25 10:33:40.477571] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:46.317 [2024-11-25 10:33:40.477589] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:46.317 [2024-11-25 10:33:40.477603] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:46.317 [2024-11-25 10:33:40.477618] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:46.317 [2024-11-25 10:33:40.477630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:46.317 [2024-11-25 10:33:40.477647] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:46.317 [2024-11-25 10:33:40.477659] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:46.317 [2024-11-25 10:33:40.477674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.477686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:46.317 [2024-11-25 10:33:40.477701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:27:46.317 [2024-11-25 10:33:40.477726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.477844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.317 [2024-11-25 10:33:40.477862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:46.317 [2024-11-25 10:33:40.477878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:46.317 [2024-11-25 10:33:40.477890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.317 [2024-11-25 10:33:40.478017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:46.317 [2024-11-25 10:33:40.478037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:46.317 [2024-11-25 10:33:40.478052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:46.317 [2024-11-25 10:33:40.478090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:46.317 [2024-11-25 10:33:40.478128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:46.317 [2024-11-25 10:33:40.478152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:46.317 [2024-11-25 10:33:40.478163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:46.317 [2024-11-25 10:33:40.478177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:46.317 [2024-11-25 10:33:40.478188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:46.317 [2024-11-25 10:33:40.478201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:46.317 [2024-11-25 10:33:40.478212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:46.317 [2024-11-25 10:33:40.478238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:46.317 [2024-11-25 10:33:40.478280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:46.317 
[2024-11-25 10:33:40.478316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:46.317 [2024-11-25 10:33:40.478353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:46.317 [2024-11-25 10:33:40.478407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.317 [2024-11-25 10:33:40.478432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:46.317 [2024-11-25 10:33:40.478448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:46.317 [2024-11-25 10:33:40.478459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:46.317 [2024-11-25 10:33:40.478472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:46.317 [2024-11-25 10:33:40.478483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:46.317 [2024-11-25 10:33:40.478503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:46.317 [2024-11-25 10:33:40.478513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:46.317 [2024-11-25 10:33:40.478527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:46.318 [2024-11-25 10:33:40.478537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.318 [2024-11-25 10:33:40.478551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:46.318 [2024-11-25 10:33:40.478563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:46.318 [2024-11-25 10:33:40.478576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.318 [2024-11-25 10:33:40.478587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:46.318 [2024-11-25 10:33:40.478602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:46.318 [2024-11-25 10:33:40.478613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:46.318 [2024-11-25 10:33:40.478630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.318 [2024-11-25 10:33:40.478643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:46.318 [2024-11-25 10:33:40.478659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:46.318 [2024-11-25 10:33:40.478670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:46.318 [2024-11-25 10:33:40.478686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:46.318 [2024-11-25 10:33:40.478697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:46.318 [2024-11-25 10:33:40.478711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:46.318 [2024-11-25 10:33:40.478727] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:46.318 [2024-11-25 
10:33:40.478744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.478760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:46.318 [2024-11-25 10:33:40.478790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:46.318 [2024-11-25 10:33:40.478804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:46.318 [2024-11-25 10:33:40.478819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:46.318 [2024-11-25 10:33:40.478831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:46.318 [2024-11-25 10:33:40.478845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:46.318 [2024-11-25 10:33:40.478857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:46.318 [2024-11-25 10:33:40.478871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:46.318 [2024-11-25 10:33:40.478883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:46.318 [2024-11-25 10:33:40.478900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.478912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.478926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.478938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.478955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:46.318 [2024-11-25 10:33:40.478967] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:46.318 [2024-11-25 10:33:40.478983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.478996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:46.318 [2024-11-25 10:33:40.479012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:46.318 [2024-11-25 10:33:40.479025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:46.318 [2024-11-25 10:33:40.479040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:46.318 [2024-11-25 10:33:40.479054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.318 [2024-11-25 10:33:40.479069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:46.318 [2024-11-25 10:33:40.479082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:27:46.318 [2024-11-25 10:33:40.479096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.318 [2024-11-25 10:33:40.479156] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:46.318 [2024-11-25 10:33:40.479179] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:49.599 [2024-11-25 10:33:43.413883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.599 [2024-11-25 10:33:43.414002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:49.599 [2024-11-25 10:33:43.414031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2934.738 ms 00:27:49.599 [2024-11-25 10:33:43.414054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.599 [2024-11-25 10:33:43.459486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.599 [2024-11-25 10:33:43.459592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.599 [2024-11-25 10:33:43.459622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.105 ms 00:27:49.599 [2024-11-25 10:33:43.459642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.599 [2024-11-25 10:33:43.459937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.599 [2024-11-25 10:33:43.459972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:49.599 [2024-11-25 10:33:43.459993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:49.600 [2024-11-25 10:33:43.460015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.510336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.510452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.600 [2024-11-25 10:33:43.510481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.195 ms 00:27:49.600 [2024-11-25 10:33:43.510502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.510591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.510627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.600 [2024-11-25 10:33:43.510649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:49.600 [2024-11-25 10:33:43.510668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.511615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.511658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.600 [2024-11-25 10:33:43.511678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:27:49.600 [2024-11-25 10:33:43.511697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 
[2024-11-25 10:33:43.511896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.511926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.600 [2024-11-25 10:33:43.511947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:27:49.600 [2024-11-25 10:33:43.511969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.536461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.536552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.600 [2024-11-25 10:33:43.536579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.452 ms 00:27:49.600 [2024-11-25 10:33:43.536599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.552256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:49.600 [2024-11-25 10:33:43.557806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.558101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:49.600 [2024-11-25 10:33:43.558146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.973 ms 00:27:49.600 [2024-11-25 10:33:43.558166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.648985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.649298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:49.600 [2024-11-25 10:33:43.649345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.751 ms 00:27:49.600 [2024-11-25 10:33:43.649365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.649624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.649653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:49.600 [2024-11-25 10:33:43.649678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:27:49.600 [2024-11-25 10:33:43.649694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.681322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.681384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:49.600 [2024-11-25 10:33:43.681415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.499 ms 00:27:49.600 [2024-11-25 10:33:43.681433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.711947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.712196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:49.600 [2024-11-25 10:33:43.712240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.460 ms 00:27:49.600 [2024-11-25 10:33:43.712259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.600 [2024-11-25 10:33:43.713188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.600 [2024-11-25 10:33:43.713230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:49.600 
[2024-11-25 10:33:43.713255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms
00:27:49.600 [2024-11-25 10:33:43.713272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.803390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.600 [2024-11-25 10:33:43.803708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:27:49.600 [2024-11-25 10:33:43.803759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.016 ms
00:27:49.600 [2024-11-25 10:33:43.803814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.837153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.600 [2024-11-25 10:33:43.837236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:27:49.600 [2024-11-25 10:33:43.837270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.198 ms
00:27:49.600 [2024-11-25 10:33:43.837287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.867995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.600 [2024-11-25 10:33:43.868049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:27:49.600 [2024-11-25 10:33:43.868077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.641 ms
00:27:49.600 [2024-11-25 10:33:43.868093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.899175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.600 [2024-11-25 10:33:43.899227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:49.600 [2024-11-25 10:33:43.899254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.019 ms
00:27:49.600 [2024-11-25 10:33:43.899271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.899344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.600 [2024-11-25 10:33:43.899367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:49.600 [2024-11-25 10:33:43.899392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:27:49.600 [2024-11-25 10:33:43.899408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.899564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:49.600 [2024-11-25 10:33:43.899589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:49.600 [2024-11-25 10:33:43.899616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:27:49.600 [2024-11-25 10:33:43.899631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:49.600 [2024-11-25 10:33:43.901322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3444.509 ms, result 0
00:27:49.600 {
00:27:49.600 "name": "ftl0",
00:27:49.600 "uuid": "a4a74b76-6e08-405f-b94b-34432b8ef08f"
00:27:49.600 }
00:27:49.600 10:33:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:27:49.600 10:33:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:27:50.167 10:33:44 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
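The three traced restore.sh lines above capture the configuration the live target is holding: `save_subsystem_config -n bdev` asks the running SPDK app over its RPC socket for a JSON description of the bdev subsystem (ftl0 plus its base and cache bdevs), and the two `echo` lines wrap it in a `{"subsystems": [...]}` envelope. A minimal sketch of that capture step, assuming the combined output is redirected into the ftl.json file that spdk_dd loads later in this log (restore.sh's actual redirection is not visible in the trace, so treat the file handling here as an assumption):

```bash
#!/usr/bin/env bash
# Capture the live bdev subsystem config into a standalone JSON config file.
# Path assumption: this is the --json file handed to spdk_dd further down.
FTL_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

{
  echo '{"subsystems": ['
  # Dump the current bdev subsystem state from the running target;
  # -n selects a single subsystem by name, exactly as traced at @62.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > "$FTL_CONF"
```

A later SPDK process started with this file replays the same bdev creation calls, which is how the short-lived spdk_dd below finds an already-formatted ftl0 to write through.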
00:27:50.466 10:33:44 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
[2024-11-25 10:33:44.568594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.568727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:50.466 [2024-11-25 10:33:44.568758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:27:50.466 [2024-11-25 10:33:44.568828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.568894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:50.466 [2024-11-25 10:33:44.572945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.572990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:50.466 [2024-11-25 10:33:44.573017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.008 ms
00:27:50.466 [2024-11-25 10:33:44.573034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.573424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.573459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:50.466 [2024-11-25 10:33:44.573489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms
00:27:50.466 [2024-11-25 10:33:44.573506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.576721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.577056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:50.466 [2024-11-25 10:33:44.577098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.181 ms
00:27:50.466 [2024-11-25 10:33:44.577116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.583689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.583736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:27:50.466 [2024-11-25 10:33:44.583767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.522 ms
00:27:50.466 [2024-11-25 10:33:44.583808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.616518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.616597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:50.466 [2024-11-25 10:33:44.616628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.574 ms
00:27:50.466 [2024-11-25 10:33:44.616645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.636536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.636608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:27:50.466 [2024-11-25 10:33:44.636640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.812 ms
00:27:50.466 [2024-11-25 10:33:44.636657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.466 [2024-11-25 10:33:44.636936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.466 [2024-11-25 10:33:44.636973]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:50.466 [2024-11-25 10:33:44.636998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:27:50.466 [2024-11-25 10:33:44.637013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.466 [2024-11-25 10:33:44.669455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.466 [2024-11-25 10:33:44.669524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:50.466 [2024-11-25 10:33:44.669555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.397 ms 00:27:50.466 [2024-11-25 10:33:44.669572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.466 [2024-11-25 10:33:44.700336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.466 [2024-11-25 10:33:44.700420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:50.466 [2024-11-25 10:33:44.700451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.671 ms 00:27:50.466 [2024-11-25 10:33:44.700468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.466 [2024-11-25 10:33:44.731292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.466 [2024-11-25 10:33:44.731572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:50.466 [2024-11-25 10:33:44.731618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.736 ms 00:27:50.466 [2024-11-25 10:33:44.731637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.466 [2024-11-25 10:33:44.762090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.466 [2024-11-25 10:33:44.762341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:50.466 [2024-11-25 10:33:44.762398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.248 ms 00:27:50.466 [2024-11-25 10:33:44.762419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.466 [2024-11-25 10:33:44.762490] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:50.466 [2024-11-25 10:33:44.762523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762697] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.762990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 
[2024-11-25 10:33:44.763211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:50.466 [2024-11-25 10:33:44.763230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:27:50.467 [2024-11-25 10:33:44.763684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.763982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:50.467 [2024-11-25 10:33:44.764465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:50.467 [2024-11-25 10:33:44.764491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4a74b76-6e08-405f-b94b-34432b8ef08f 00:27:50.467 [2024-11-25 10:33:44.764507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:50.467 [2024-11-25 10:33:44.764528] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:50.467 [2024-11-25 10:33:44.764543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:50.467 [2024-11-25 10:33:44.764568] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:50.727 [2024-11-25 10:33:44.764583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:50.727 [2024-11-25 10:33:44.764602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:50.727 [2024-11-25 10:33:44.764617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:50.727 [2024-11-25 10:33:44.764634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:50.727 [2024-11-25 10:33:44.764647] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:27:50.727 [2024-11-25 10:33:44.764665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.727 [2024-11-25 10:33:44.764680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:50.727 [2024-11-25 10:33:44.764699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:27:50.727 [2024-11-25 10:33:44.764714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.782583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.727 [2024-11-25 10:33:44.782648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:50.727 [2024-11-25 10:33:44.782676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.750 ms 00:27:50.727 [2024-11-25 10:33:44.782693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.783255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.727 [2024-11-25 10:33:44.783294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:50.727 [2024-11-25 10:33:44.783318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:27:50.727 [2024-11-25 10:33:44.783338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.842783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.727 [2024-11-25 10:33:44.842880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:50.727 [2024-11-25 10:33:44.842911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.727 [2024-11-25 10:33:44.842928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.843052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.727 [2024-11-25 10:33:44.843075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:50.727 [2024-11-25 10:33:44.843096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.727 [2024-11-25 10:33:44.843117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.843339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.727 [2024-11-25 10:33:44.843366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:50.727 [2024-11-25 10:33:44.843387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.727 [2024-11-25 10:33:44.843403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.843449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.727 [2024-11-25 10:33:44.843468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:50.727 [2024-11-25 10:33:44.843487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.727 [2024-11-25 10:33:44.843503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:44.959945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.727 [2024-11-25 10:33:44.960051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:50.727 [2024-11-25 10:33:44.960084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:50.727 [2024-11-25 10:33:44.960102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.727 [2024-11-25 10:33:45.050608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.050725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:50.728 [2024-11-25 10:33:45.050759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.050817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.051033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.051059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:50.728 [2024-11-25 10:33:45.051081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.051097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.051197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.051221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:50.728 [2024-11-25 10:33:45.051242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.051257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.051426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.051460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:50.728 [2024-11-25 10:33:45.051483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.051499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.051578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.051608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:50.728 [2024-11-25 10:33:45.051628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.051644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.051715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.051739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:50.728 [2024-11-25 10:33:45.051761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.051798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.051888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.728 [2024-11-25 10:33:45.051911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:50.728 [2024-11-25 10:33:45.051931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.728 [2024-11-25 10:33:45.051947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.728 [2024-11-25 10:33:45.052165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.528 ms, result 0 00:27:50.728 true 00:27:50.987 10:33:45 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79232 
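killprocess here is the shared helper from common/autotest_common.sh, and the traced lines that follow (@954 through @978) show what it does: validate the pid argument, confirm the process is still alive with `kill -0`, look up the process name (the SPDK reactor shows up as `reactor_0`), then signal it and `wait` so the FTL shutdown above can finish before the test moves on. A rough reconstruction of that logic, assembled only from the commands visible in this trace (the real function in autotest_common.sh may carry extra branches, e.g. the sudo path tested at @964):

```bash
# Sketch of the killprocess helper as implied by the trace below.
killprocess() {
  local pid=$1 process_name
  [ -z "$pid" ] && return 1            # @954: reject a missing pid
  kill -0 "$pid" || return 1           # @958: is the process still alive?
  if [ "$(uname)" = Linux ]; then      # @959: name lookup is per-OS
    process_name=$(ps --no-headers -o comm= "$pid")   # @960: -> reactor_0
  fi
  # @964: the real helper appears to special-case processes started via
  # sudo; the plain-kill branch is the one taken in this run.
  if [ "$process_name" != sudo ]; then
    echo "killing process with pid $pid"   # @972
    kill "$pid"                            # @973: SIGTERM -> graceful FTL shutdown
    wait "$pid"                            # @978: block until the app exits
  fi
}
```

The `wait` matters: the pid belongs to a child of the test shell, and blocking on it is plausibly why the next traced command only appears a few seconds later in the timestamps.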
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79232 ']'
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79232
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79232
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:50.987 killing process with pid 79232
10:33:45 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79232'
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79232
00:27:50.987 10:33:45 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79232
00:27:56.253 10:33:49 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:28:01.519 262144+0 records in
00:28:01.519 262144+0 records out
00:28:01.519 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.14999 s, 208 MB/s
00:28:01.519 10:33:55 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:28:03.483 10:33:57 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:03.483 [2024-11-25 10:33:57.421405] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
00:28:03.483 [2024-11-25 10:33:57.421629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79489 ]
00:28:03.483 [2024-11-25 10:33:57.621316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:03.483 [2024-11-25 10:33:57.797941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:04.050 [2024-11-25 10:33:58.208746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:04.051 [2024-11-25 10:33:58.208862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:04.310 [2024-11-25 10:33:58.384636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.310 [2024-11-25 10:33:58.384737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:04.310 [2024-11-25 10:33:58.384802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:28:04.310 [2024-11-25 10:33:58.384819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.310 [2024-11-25 10:33:58.384908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.310 [2024-11-25 10:33:58.384929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:04.310 [2024-11-25 10:33:58.384956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:28:04.310 [2024-11-25 10:33:58.384968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
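Before the new FTL instance finishes attaching, it is worth spelling out what the dd/md5sum/spdk_dd sequence traced above is doing. restore.sh@69 generates exactly 256K blocks of 4 KiB each, i.e. 262144 * 4096 = 1073741824 bytes (1 GiB), and the 208 MB/s figure is just /dev/urandom-to-file speed, not FTL throughput. restore.sh@70 records a reference checksum, and restore.sh@73 copies the file into ftl0 via spdk_dd, whose --json flag replays the bdev config saved earlier. A sketch of that flow, under the assumption that the md5 captured at @70 is later compared against a read-back once the device is restored (the comparison itself falls outside this part of the log, and the variable names are mine):

```bash
TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
FTL_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K   # 262144 * 4096 B = 1 GiB
md5_before=$(md5sum "$TESTFILE" | awk '{print $1}')  # reference checksum
echo "md5 before restore: $md5_before"

# spdk_dd is a short-lived SPDK app: --json replays the saved bdev config,
# so ftl0 exists again, then --if/--ob copy the file into the FTL bdev.
"$SPDK_DD" --if="$TESTFILE" --ob=ftl0 --json="$FTL_CONF"
```

The startup notices that follow show this spdk_dd process bringing the same ftl0 device back up from the superblock it persisted at shutdown.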
00:28:04.310 [2024-11-25 10:33:58.385000] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:04.310 [2024-11-25 10:33:58.386007] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:04.310 [2024-11-25 10:33:58.386043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.310 [2024-11-25 10:33:58.386073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:04.310 [2024-11-25 10:33:58.386087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms
00:28:04.310 [2024-11-25 10:33:58.386100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.310 [2024-11-25 10:33:58.388708] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:04.310 [2024-11-25 10:33:58.406527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.310 [2024-11-25 10:33:58.406572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:04.310 [2024-11-25 10:33:58.406591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.820 ms
00:28:04.310 [2024-11-25 10:33:58.406610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.310 [2024-11-25 10:33:58.406709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.310 [2024-11-25 10:33:58.406730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:04.310 [2024-11-25 10:33:58.406743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:28:04.310 [2024-11-25 10:33:58.406755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.310 [2024-11-25 10:33:58.419279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.310 [2024-11-25 10:33:58.419360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:04.310 [2024-11-25 10:33:58.419380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.388 ms
00:28:04.310 [2024-11-25 10:33:58.419393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.310 [2024-11-25 10:33:58.419576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.311 [2024-11-25 10:33:58.419599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:04.311 [2024-11-25 10:33:58.419613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms
00:28:04.311 [2024-11-25 10:33:58.419626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.311 [2024-11-25 10:33:58.419745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.311 [2024-11-25 10:33:58.419796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:28:04.311 [2024-11-25 10:33:58.419815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:28:04.311 [2024-11-25 10:33:58.419828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:04.311 [2024-11-25 10:33:58.419870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:04.311 [2024-11-25 10:33:58.425675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.311 [2024-11-25 10:33:58.425728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:04.311 [2024-11-25 10:33:58.425745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.817 ms
00:28:04.311 [2024-11-25 10:33:58.425784]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.311 [2024-11-25 10:33:58.425840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.311 [2024-11-25 10:33:58.425859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:04.311 [2024-11-25 10:33:58.425874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:04.311 [2024-11-25 10:33:58.425886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.311 [2024-11-25 10:33:58.425938] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:04.311 [2024-11-25 10:33:58.425982] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:04.311 [2024-11-25 10:33:58.426030] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:04.311 [2024-11-25 10:33:58.426061] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:04.311 [2024-11-25 10:33:58.426183] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:04.311 [2024-11-25 10:33:58.426200] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:04.311 [2024-11-25 10:33:58.426216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:04.311 [2024-11-25 10:33:58.426234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:04.311 [2024-11-25 10:33:58.426259] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:04.311 [2024-11-25 10:33:58.426276] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:04.311 [2024-11-25 10:33:58.426289] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:04.311 [2024-11-25 10:33:58.426301] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:04.311 [2024-11-25 10:33:58.426312] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:04.311 [2024-11-25 10:33:58.426338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.311 [2024-11-25 10:33:58.426350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:04.311 [2024-11-25 10:33:58.426364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:28:04.311 [2024-11-25 10:33:58.426375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.311 [2024-11-25 10:33:58.426491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.311 [2024-11-25 10:33:58.426510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:04.311 [2024-11-25 10:33:58.426523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:04.311 [2024-11-25 10:33:58.426535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.311 [2024-11-25 10:33:58.426672] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:04.311 [2024-11-25 10:33:58.426711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:04.311 [2024-11-25 10:33:58.426726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:28:04.311 [2024-11-25 10:33:58.426738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.426751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:04.311 [2024-11-25 10:33:58.426786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.426801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:04.311 [2024-11-25 10:33:58.426812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:04.311 [2024-11-25 10:33:58.426824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:04.311 [2024-11-25 10:33:58.426835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.311 [2024-11-25 10:33:58.426847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:04.311 [2024-11-25 10:33:58.426858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:04.311 [2024-11-25 10:33:58.426869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.311 [2024-11-25 10:33:58.426880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:04.311 [2024-11-25 10:33:58.426891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:04.311 [2024-11-25 10:33:58.426925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.426937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:04.311 [2024-11-25 10:33:58.426948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:04.311 [2024-11-25 10:33:58.426959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.426970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:04.311 [2024-11-25 10:33:58.426981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:04.311 [2024-11-25 10:33:58.426991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.311 [2024-11-25 10:33:58.427002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:04.311 [2024-11-25 10:33:58.427013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.311 [2024-11-25 10:33:58.427034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:04.311 [2024-11-25 10:33:58.427045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.311 [2024-11-25 10:33:58.427067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:04.311 [2024-11-25 10:33:58.427078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.311 [2024-11-25 10:33:58.427100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:04.311 [2024-11-25 10:33:58.427111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.311 [2024-11-25 10:33:58.427132] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:28:04.311 [2024-11-25 10:33:58.427144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:04.311 [2024-11-25 10:33:58.427154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.311 [2024-11-25 10:33:58.427167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:04.311 [2024-11-25 10:33:58.427178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:04.311 [2024-11-25 10:33:58.427189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:04.311 [2024-11-25 10:33:58.427212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:04.311 [2024-11-25 10:33:58.427223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427240] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:04.311 [2024-11-25 10:33:58.427260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:04.311 [2024-11-25 10:33:58.427275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.311 [2024-11-25 10:33:58.427287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.311 [2024-11-25 10:33:58.427300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:04.311 [2024-11-25 10:33:58.427311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:04.311 [2024-11-25 10:33:58.427322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:04.311 [2024-11-25 10:33:58.427333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:04.311 [2024-11-25 10:33:58.427344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:04.311 [2024-11-25 10:33:58.427355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:04.311 [2024-11-25 10:33:58.427368] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:04.311 [2024-11-25 10:33:58.427383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.311 [2024-11-25 10:33:58.427396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:04.311 [2024-11-25 10:33:58.427409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:04.311 [2024-11-25 10:33:58.427421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:04.311 [2024-11-25 10:33:58.427434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:04.311 [2024-11-25 10:33:58.427446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:04.311 [2024-11-25 10:33:58.427457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:04.311 [2024-11-25 10:33:58.427469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:04.311 [2024-11-25 10:33:58.427480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:04.312 [2024-11-25 10:33:58.427491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:04.312 [2024-11-25 10:33:58.427502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:04.312 [2024-11-25 10:33:58.427514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:04.312 [2024-11-25 10:33:58.427525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:04.312 [2024-11-25 10:33:58.427537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:04.312 [2024-11-25 10:33:58.427549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:04.312 [2024-11-25 10:33:58.427564] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:04.312 [2024-11-25 10:33:58.427592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.312 [2024-11-25 10:33:58.427607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.312 [2024-11-25 10:33:58.427619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:04.312 [2024-11-25 10:33:58.427632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:04.312 [2024-11-25 10:33:58.427643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:04.312 [2024-11-25 10:33:58.427656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.427669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:04.312 [2024-11-25 10:33:58.427682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:28:04.312 [2024-11-25 10:33:58.427693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.476808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.476899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.312 [2024-11-25 10:33:58.476922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.032 ms 00:28:04.312 [2024-11-25 10:33:58.476935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.477088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.477106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:04.312 [2024-11-25 10:33:58.477121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.083 ms 00:28:04.312 [2024-11-25 10:33:58.477132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.538148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.538227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.312 [2024-11-25 10:33:58.538259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.841 ms 00:28:04.312 [2024-11-25 10:33:58.538275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.538368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.538400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.312 [2024-11-25 10:33:58.538423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:04.312 [2024-11-25 10:33:58.538436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.539306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.539340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.312 [2024-11-25 10:33:58.539356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:28:04.312 [2024-11-25 10:33:58.539369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.539563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.539584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.312 [2024-11-25 10:33:58.539598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:28:04.312 [2024-11-25 10:33:58.539618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.563283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.563333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.312 [2024-11-25 10:33:58.563362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:28:04.312 [2024-11-25 10:33:58.563376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.581406] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:04.312 [2024-11-25 10:33:58.581454] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:04.312 [2024-11-25 10:33:58.581475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.581489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:04.312 [2024-11-25 10:33:58.581503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.936 ms 00:28:04.312 [2024-11-25 10:33:58.581515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.611408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.611453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:04.312 [2024-11-25 10:33:58.611479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.837 ms 00:28:04.312 [2024-11-25 10:33:58.611492] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.312 [2024-11-25 10:33:58.626980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.312 [2024-11-25 10:33:58.627048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:04.312 [2024-11-25 10:33:58.627077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.443 ms 00:28:04.312 [2024-11-25 10:33:58.627089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.642259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.642303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:04.571 [2024-11-25 10:33:58.642320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.123 ms 00:28:04.571 [2024-11-25 10:33:58.642332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.643211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.643247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:04.571 [2024-11-25 10:33:58.643265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:28:04.571 [2024-11-25 10:33:58.643278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.730674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.730826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:04.571 [2024-11-25 10:33:58.730855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.353 ms 00:28:04.571 [2024-11-25 10:33:58.730880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.744079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:04.571 [2024-11-25 10:33:58.749287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.749331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:04.571 [2024-11-25 10:33:58.749350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.289 ms 00:28:04.571 [2024-11-25 10:33:58.749364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.749537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.749560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:04.571 [2024-11-25 10:33:58.749575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:04.571 [2024-11-25 10:33:58.749589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.749723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.749754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:04.571 [2024-11-25 10:33:58.749783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:04.571 [2024-11-25 10:33:58.749810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.749849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.749865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:28:04.571 [2024-11-25 10:33:58.749878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:04.571 [2024-11-25 10:33:58.749890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.749966] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:04.571 [2024-11-25 10:33:58.749991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.750009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:04.571 [2024-11-25 10:33:58.750023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:04.571 [2024-11-25 10:33:58.750035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.782297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.782349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:04.571 [2024-11-25 10:33:58.782368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.228 ms 00:28:04.571 [2024-11-25 10:33:58.782391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.782507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.571 [2024-11-25 10:33:58.782528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:04.571 [2024-11-25 10:33:58.782543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:04.571 [2024-11-25 10:33:58.782554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.571 [2024-11-25 10:33:58.784208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.956 ms, result 0 00:28:05.506  [2024-11-25T10:34:40.716Z] Copying: 1024/1024 [MB] (average 24 MBps) [2024-11-25 10:34:40.684446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.383 [2024-11-25 10:34:40.684520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:46.383 [2024-11-25 10:34:40.684543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:46.383 [2024-11-25 10:34:40.684557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.383 [2024-11-25 10:34:40.684588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:46.383 [2024-11-25 10:34:40.688280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.383 [2024-11-25 10:34:40.688339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:46.383 [2024-11-25 10:34:40.688356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:28:46.383 [2024-11-25 10:34:40.688368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.383 [2024-11-25 10:34:40.690162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.383 [2024-11-25 10:34:40.690208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:46.384 [2024-11-25 10:34:40.690226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:28:46.384 [2024-11-25 10:34:40.690237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.384 [2024-11-25 10:34:40.706514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.384 [2024-11-25 10:34:40.706563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:46.384 [2024-11-25 10:34:40.706582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.254 ms 00:28:46.384 [2024-11-25 10:34:40.706594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.384 [2024-11-25 10:34:40.713089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.384 [2024-11-25 10:34:40.713137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:46.384 [2024-11-25 10:34:40.713154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.453 ms 00:28:46.384 [2024-11-25 10:34:40.713165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.744634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.744694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:46.642 [2024-11-25 
10:34:40.744712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.399 ms 00:28:46.642 [2024-11-25 10:34:40.744725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.762389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.762443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:46.642 [2024-11-25 10:34:40.762462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.604 ms 00:28:46.642 [2024-11-25 10:34:40.762474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.762610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.762632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:46.642 [2024-11-25 10:34:40.762666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:28:46.642 [2024-11-25 10:34:40.762687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.793444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.793513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:46.642 [2024-11-25 10:34:40.793542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.725 ms 00:28:46.642 [2024-11-25 10:34:40.793554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.824129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.824181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:46.642 [2024-11-25 10:34:40.824215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.525 ms 00:28:46.642 [2024-11-25 10:34:40.824227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.854241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.854295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:46.642 [2024-11-25 10:34:40.854313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.965 ms 00:28:46.642 [2024-11-25 10:34:40.854325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.884462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.642 [2024-11-25 10:34:40.884521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:46.642 [2024-11-25 10:34:40.884540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.991 ms 00:28:46.642 [2024-11-25 10:34:40.884555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.642 [2024-11-25 10:34:40.884611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:46.642 [2024-11-25 10:34:40.884638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.884999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885081] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 
10:34:40.885479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:46.642 [2024-11-25 10:34:40.885610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:28:46.643 [2024-11-25 10:34:40.885892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.885999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:46.643 [2024-11-25 10:34:40.886203] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:46.643 [2024-11-25 10:34:40.886225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4a74b76-6e08-405f-b94b-34432b8ef08f 00:28:46.643 [2024-11-25 10:34:40.886237] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:46.643 [2024-11-25 
10:34:40.886252] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:46.643 [2024-11-25 10:34:40.886268] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:46.643 [2024-11-25 10:34:40.886290] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:46.643 [2024-11-25 10:34:40.886310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:46.643 [2024-11-25 10:34:40.886328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:46.643 [2024-11-25 10:34:40.886347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:46.643 [2024-11-25 10:34:40.886372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:46.643 [2024-11-25 10:34:40.886383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:46.643 [2024-11-25 10:34:40.886395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.643 [2024-11-25 10:34:40.886426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:46.643 [2024-11-25 10:34:40.886442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:28:46.643 [2024-11-25 10:34:40.886462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.643 [2024-11-25 10:34:40.903833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.643 [2024-11-25 10:34:40.903895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:46.643 [2024-11-25 10:34:40.903916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:28:46.643 [2024-11-25 10:34:40.903928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.643 [2024-11-25 10:34:40.904419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.643 [2024-11-25 10:34:40.904444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:46.643 [2024-11-25 10:34:40.904461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:28:46.643 [2024-11-25 10:34:40.904473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.643 [2024-11-25 10:34:40.948980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.643 [2024-11-25 10:34:40.949050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:46.643 [2024-11-25 10:34:40.949070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.643 [2024-11-25 10:34:40.949083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.643 [2024-11-25 10:34:40.949183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.643 [2024-11-25 10:34:40.949209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:46.643 [2024-11-25 10:34:40.949231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.643 [2024-11-25 10:34:40.949252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.643 [2024-11-25 10:34:40.949380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.643 [2024-11-25 10:34:40.949408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:46.643 [2024-11-25 10:34:40.949423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.643 [2024-11-25 10:34:40.949434] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:46.643 [2024-11-25 10:34:40.949458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.643 [2024-11-25 10:34:40.949472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:46.643 [2024-11-25 10:34:40.949485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.643 [2024-11-25 10:34:40.949496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.060109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.060174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:46.901 [2024-11-25 10:34:41.060194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.060206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.146300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.146368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:46.901 [2024-11-25 10:34:41.146388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.146408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.146544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.146587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.901 [2024-11-25 10:34:41.146614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.146627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.146680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.146702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.901 [2024-11-25 10:34:41.146715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.146727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.146904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.146941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.901 [2024-11-25 10:34:41.146955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.146967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.147027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.147058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:46.901 [2024-11-25 10:34:41.147081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.147101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.147155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.147182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.901 [2024-11-25 10:34:41.147219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:46.901 [2024-11-25 10:34:41.147233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.147290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.901 [2024-11-25 10:34:41.147315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.901 [2024-11-25 10:34:41.147339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.901 [2024-11-25 10:34:41.147361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.901 [2024-11-25 10:34:41.147529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.043 ms, result 0 00:28:47.833 00:28:47.833 00:28:48.090 10:34:42 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:48.090 [2024-11-25 10:34:42.310607] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:28:48.090 [2024-11-25 10:34:42.310873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79932 ] 00:28:48.349 [2024-11-25 10:34:42.491431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.349 [2024-11-25 10:34:42.644557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.918 [2024-11-25 10:34:43.000980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.918 [2024-11-25 10:34:43.001058] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.918 [2024-11-25 10:34:43.163747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-11-25 10:34:43.163821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.918 [2024-11-25 10:34:43.163850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.918 [2024-11-25 10:34:43.163863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-11-25 10:34:43.163930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-11-25 10:34:43.163949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.918 [2024-11-25 10:34:43.163967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:48.918 [2024-11-25 10:34:43.163978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-11-25 10:34:43.164008] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.918 [2024-11-25 10:34:43.164907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.918 [2024-11-25 10:34:43.164946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-11-25 10:34:43.164959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.918 [2024-11-25 10:34:43.164973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:28:48.918 [2024-11-25 10:34:43.164985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 
[2024-11-25 10:34:43.166903] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:48.918 [2024-11-25 10:34:43.183546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-11-25 10:34:43.183591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.918 [2024-11-25 10:34:43.183609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.645 ms 00:28:48.918 [2024-11-25 10:34:43.183622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-11-25 10:34:43.183701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-11-25 10:34:43.183720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:48.918 [2024-11-25 10:34:43.183733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:48.918 [2024-11-25 10:34:43.183745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-11-25 10:34:43.192246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.192293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.919 [2024-11-25 10:34:43.192310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.391 ms 00:28:48.919 [2024-11-25 10:34:43.192322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.192428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.192447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.919 [2024-11-25 10:34:43.192460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:48.919 [2024-11-25 10:34:43.192472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.192531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.192549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.919 [2024-11-25 10:34:43.192562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:48.919 [2024-11-25 10:34:43.192573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.192609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.919 [2024-11-25 10:34:43.197547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.197585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.919 [2024-11-25 10:34:43.197600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.946 ms 00:28:48.919 [2024-11-25 10:34:43.197618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.197656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.197671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.919 [2024-11-25 10:34:43.197684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:48.919 [2024-11-25 10:34:43.197696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.197763] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.919 
[2024-11-25 10:34:43.197814] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.919 [2024-11-25 10:34:43.197858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:48.919 [2024-11-25 10:34:43.197883] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.919 [2024-11-25 10:34:43.197994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.919 [2024-11-25 10:34:43.198011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.919 [2024-11-25 10:34:43.198026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.919 [2024-11-25 10:34:43.198042] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198056] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.919 [2024-11-25 10:34:43.198080] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.919 [2024-11-25 10:34:43.198090] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.919 [2024-11-25 10:34:43.198101] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.919 [2024-11-25 10:34:43.198118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.198130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.919 [2024-11-25 10:34:43.198142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:28:48.919 [2024-11-25 10:34:43.198153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.198251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.919 [2024-11-25 10:34:43.198266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.919 [2024-11-25 10:34:43.198287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:48.919 [2024-11-25 10:34:43.198298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.919 [2024-11-25 10:34:43.198430] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.919 [2024-11-25 10:34:43.198456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.919 [2024-11-25 10:34:43.198468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.919 [2024-11-25 10:34:43.198505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.919 [2024-11-25 10:34:43.198538] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.919 [2024-11-25 10:34:43.198559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.919 [2024-11-25 10:34:43.198570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.919 [2024-11-25 10:34:43.198580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.919 [2024-11-25 10:34:43.198591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.919 [2024-11-25 10:34:43.198602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.919 [2024-11-25 10:34:43.198624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.919 [2024-11-25 10:34:43.198646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.919 [2024-11-25 10:34:43.198678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.919 [2024-11-25 10:34:43.198710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.919 [2024-11-25 10:34:43.198741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.919 [2024-11-25 10:34:43.198794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.919 [2024-11-25 10:34:43.198820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.919 [2024-11-25 10:34:43.198831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.919 [2024-11-25 10:34:43.198852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.919 [2024-11-25 10:34:43.198863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.919 [2024-11-25 10:34:43.198873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.919 [2024-11-25 10:34:43.198885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.919 [2024-11-25 10:34:43.198896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.919 [2024-11-25 10:34:43.198906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.919 [2024-11-25 10:34:43.198917] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.919 [2024-11-25 10:34:43.198928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.920 [2024-11-25 10:34:43.198944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.920 [2024-11-25 10:34:43.198955] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.920 [2024-11-25 10:34:43.198967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.920 [2024-11-25 10:34:43.198978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.920 [2024-11-25 10:34:43.198990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.920 [2024-11-25 10:34:43.199001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.920 [2024-11-25 10:34:43.199012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.920 [2024-11-25 10:34:43.199023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.920 [2024-11-25 10:34:43.199034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.920 [2024-11-25 10:34:43.199045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.920 [2024-11-25 10:34:43.199055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.920 [2024-11-25 10:34:43.199068] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.920 [2024-11-25 10:34:43.199083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.920 [2024-11-25 10:34:43.199108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.920 [2024-11-25 10:34:43.199119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.920 [2024-11-25 10:34:43.199130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.920 [2024-11-25 10:34:43.199142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.920 [2024-11-25 10:34:43.199153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.920 [2024-11-25 10:34:43.199164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.920 [2024-11-25 10:34:43.199175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.920 [2024-11-25 10:34:43.199186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.920 [2024-11-25 10:34:43.199198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199209] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.920 [2024-11-25 10:34:43.199254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.920 [2024-11-25 10:34:43.199273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.920 [2024-11-25 10:34:43.199298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.920 [2024-11-25 10:34:43.199310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.920 [2024-11-25 10:34:43.199322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.920 [2024-11-25 10:34:43.199335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.920 [2024-11-25 10:34:43.199348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.920 [2024-11-25 10:34:43.199360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:28:48.920 [2024-11-25 10:34:43.199371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.920 [2024-11-25 10:34:43.238732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.920 [2024-11-25 10:34:43.238806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.920 [2024-11-25 10:34:43.238828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.292 ms 00:28:48.920 [2024-11-25 10:34:43.238841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.920 [2024-11-25 10:34:43.238970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.920 [2024-11-25 10:34:43.238986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.920 [2024-11-25 10:34:43.238999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:48.920 [2024-11-25 10:34:43.239010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.289811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.289877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.179 [2024-11-25 10:34:43.289899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.708 ms 00:28:49.179 [2024-11-25 10:34:43.289911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.289991] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.290008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.179 [2024-11-25 10:34:43.290023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:49.179 [2024-11-25 10:34:43.290042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.290697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.290727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.179 [2024-11-25 10:34:43.290753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:28:49.179 [2024-11-25 10:34:43.290765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.290956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.290976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.179 [2024-11-25 10:34:43.290989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:28:49.179 [2024-11-25 10:34:43.291009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.310426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.310474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.179 [2024-11-25 10:34:43.310498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.389 ms 00:28:49.179 [2024-11-25 10:34:43.310511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.327101] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:49.179 [2024-11-25 10:34:43.327146] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:49.179 [2024-11-25 10:34:43.327165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.327178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:49.179 [2024-11-25 10:34:43.327191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.503 ms 00:28:49.179 [2024-11-25 10:34:43.327203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.356440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.356491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:49.179 [2024-11-25 10:34:43.356509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.136 ms 00:28:49.179 [2024-11-25 10:34:43.356522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.179 [2024-11-25 10:34:43.371839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.179 [2024-11-25 10:34:43.371883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:49.179 [2024-11-25 10:34:43.371900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.267 ms 00:28:49.180 [2024-11-25 10:34:43.371912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.386926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 
10:34:43.386968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:49.180 [2024-11-25 10:34:43.386985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.970 ms 00:28:49.180 [2024-11-25 10:34:43.386997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.387886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.387918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:49.180 [2024-11-25 10:34:43.387934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:28:49.180 [2024-11-25 10:34:43.387951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.465913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.465975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:49.180 [2024-11-25 10:34:43.466002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.935 ms 00:28:49.180 [2024-11-25 10:34:43.466024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.478747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:49.180 [2024-11-25 10:34:43.482619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.482656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:49.180 [2024-11-25 10:34:43.482676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.519 ms 00:28:49.180 [2024-11-25 10:34:43.482690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.482832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.482853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:49.180 [2024-11-25 10:34:43.482867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:49.180 [2024-11-25 10:34:43.482884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.482996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.483024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:49.180 [2024-11-25 10:34:43.483039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:49.180 [2024-11-25 10:34:43.483050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.483083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.483099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:49.180 [2024-11-25 10:34:43.483112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:49.180 [2024-11-25 10:34:43.483123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.180 [2024-11-25 10:34:43.483169] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:49.180 [2024-11-25 10:34:43.483189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.180 [2024-11-25 10:34:43.483202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:49.180 
[2024-11-25 10:34:43.483214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:49.180 [2024-11-25 10:34:43.483226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.438 [2024-11-25 10:34:43.514137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.438 [2024-11-25 10:34:43.514185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:49.438 [2024-11-25 10:34:43.514204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.882 ms 00:28:49.438 [2024-11-25 10:34:43.514223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.438 [2024-11-25 10:34:43.514316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.438 [2024-11-25 10:34:43.514335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:49.438 [2024-11-25 10:34:43.514348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:49.438 [2024-11-25 10:34:43.514359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.438 [2024-11-25 10:34:43.515794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.513 ms, result 0 00:28:50.892  [2024-11-25T10:34:45.809Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-25T10:34:46.744Z] Copying: 54/1024 [MB] (27 MBps) [2024-11-25T10:34:48.120Z] Copying: 81/1024 [MB] (27 MBps) [2024-11-25T10:34:49.054Z] Copying: 108/1024 [MB] (26 MBps) [2024-11-25T10:34:49.988Z] Copying: 134/1024 [MB] (26 MBps) [2024-11-25T10:34:50.922Z] Copying: 160/1024 [MB] (26 MBps) [2024-11-25T10:34:51.871Z] Copying: 187/1024 [MB] (26 MBps) [2024-11-25T10:34:52.829Z] Copying: 214/1024 [MB] (27 MBps) [2024-11-25T10:34:53.762Z] Copying: 240/1024 [MB] (26 MBps) [2024-11-25T10:34:55.137Z] Copying: 266/1024 [MB] (25 MBps) [2024-11-25T10:34:56.074Z] Copying: 293/1024 [MB] (26 MBps) [2024-11-25T10:34:57.011Z] Copying: 320/1024 [MB] (26 MBps) [2024-11-25T10:34:57.983Z] Copying: 346/1024 [MB] (26 MBps) [2024-11-25T10:34:58.917Z] Copying: 372/1024 [MB] (26 MBps) [2024-11-25T10:34:59.852Z] Copying: 397/1024 [MB] (24 MBps) [2024-11-25T10:35:00.839Z] Copying: 423/1024 [MB] (26 MBps) [2024-11-25T10:35:01.779Z] Copying: 449/1024 [MB] (25 MBps) [2024-11-25T10:35:03.153Z] Copying: 475/1024 [MB] (25 MBps) [2024-11-25T10:35:04.088Z] Copying: 500/1024 [MB] (25 MBps) [2024-11-25T10:35:05.045Z] Copying: 526/1024 [MB] (25 MBps) [2024-11-25T10:35:05.979Z] Copying: 552/1024 [MB] (25 MBps) [2024-11-25T10:35:06.913Z] Copying: 578/1024 [MB] (25 MBps) [2024-11-25T10:35:07.847Z] Copying: 604/1024 [MB] (25 MBps) [2024-11-25T10:35:08.780Z] Copying: 630/1024 [MB] (25 MBps) [2024-11-25T10:35:09.775Z] Copying: 656/1024 [MB] (26 MBps) [2024-11-25T10:35:11.144Z] Copying: 682/1024 [MB] (26 MBps) [2024-11-25T10:35:12.077Z] Copying: 708/1024 [MB] (26 MBps) [2024-11-25T10:35:13.013Z] Copying: 734/1024 [MB] (25 MBps) [2024-11-25T10:35:13.949Z] Copying: 760/1024 [MB] (25 MBps) [2024-11-25T10:35:14.884Z] Copying: 786/1024 [MB] (26 MBps) [2024-11-25T10:35:15.819Z] Copying: 813/1024 [MB] (26 MBps) [2024-11-25T10:35:16.769Z] Copying: 839/1024 [MB] (26 MBps) [2024-11-25T10:35:18.145Z] Copying: 866/1024 [MB] (26 MBps) [2024-11-25T10:35:19.081Z] Copying: 893/1024 [MB] (26 MBps) [2024-11-25T10:35:20.013Z] Copying: 920/1024 [MB] (26 MBps) [2024-11-25T10:35:20.946Z] Copying: 945/1024 [MB] (25 MBps) [2024-11-25T10:35:21.880Z] Copying: 970/1024 [MB] (25 MBps) [2024-11-25T10:35:22.813Z] 
Copying: 996/1024 [MB] (26 MBps) [2024-11-25T10:35:22.814Z] Copying: 1023/1024 [MB] (26 MBps) [2024-11-25T10:35:23.381Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-25 10:35:23.202804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.202893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:29.048 [2024-11-25 10:35:23.202933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:29.048 [2024-11-25 10:35:23.202946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.202982] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:29.048 [2024-11-25 10:35:23.207133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.207173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:29.048 [2024-11-25 10:35:23.207198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:29:29.048 [2024-11-25 10:35:23.207210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.207477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.207504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:29.048 [2024-11-25 10:35:23.207519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:29:29.048 [2024-11-25 10:35:23.207531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.211379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.211415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:29.048 [2024-11-25 10:35:23.211431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.827 ms 00:29:29.048 [2024-11-25 10:35:23.211443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.218386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.218427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:29.048 [2024-11-25 10:35:23.218444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:29:29.048 [2024-11-25 10:35:23.218455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.251780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.251841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:29.048 [2024-11-25 10:35:23.251861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.229 ms 00:29:29.048 [2024-11-25 10:35:23.251874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.270999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.271059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:29.048 [2024-11-25 10:35:23.271080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.068 ms 00:29:29.048 [2024-11-25 10:35:23.271094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.271268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:29:29.048 [2024-11-25 10:35:23.271299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:29.048 [2024-11-25 10:35:23.271315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:29.048 [2024-11-25 10:35:23.271326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.302684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.302726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:29.048 [2024-11-25 10:35:23.302745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.334 ms 00:29:29.048 [2024-11-25 10:35:23.302757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.333016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.333073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:29.048 [2024-11-25 10:35:23.333092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.156 ms 00:29:29.048 [2024-11-25 10:35:23.333104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.048 [2024-11-25 10:35:23.362946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.048 [2024-11-25 10:35:23.362996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:29.048 [2024-11-25 10:35:23.363014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.795 ms 00:29:29.049 [2024-11-25 10:35:23.363026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.308 [2024-11-25 10:35:23.392893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.308 [2024-11-25 10:35:23.392935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:29.308 [2024-11-25 10:35:23.392953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.704 ms 00:29:29.308 [2024-11-25 10:35:23.392965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.308 [2024-11-25 10:35:23.393011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:29.308 [2024-11-25 10:35:23.393035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:29:29.308 [2024-11-25 10:35:23.393157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:29.308 [2024-11-25 10:35:23.393508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.393990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394072] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-11-25 10:35:23.394291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:29.309 [2024-11-25 10:35:23.394309] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4a74b76-6e08-405f-b94b-34432b8ef08f 00:29:29.309 [2024-11-25 10:35:23.394321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:29.309 [2024-11-25 10:35:23.394333] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:29.309 [2024-11-25 10:35:23.394344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:29.309 [2024-11-25 10:35:23.394355] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:29.309 [2024-11-25 10:35:23.394366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:29.309 [2024-11-25 10:35:23.394378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:29.309 [2024-11-25 10:35:23.394408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:29.309 [2024-11-25 10:35:23.394420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:29.309 [2024-11-25 
10:35:23.394440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:29.309 [2024-11-25 10:35:23.394453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-11-25 10:35:23.394465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:29.309 [2024-11-25 10:35:23.394478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:29:29.309 [2024-11-25 10:35:23.394489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-11-25 10:35:23.411416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-11-25 10:35:23.411461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:29.309 [2024-11-25 10:35:23.411479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.874 ms 00:29:29.309 [2024-11-25 10:35:23.411492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-11-25 10:35:23.411991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-11-25 10:35:23.412021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:29.309 [2024-11-25 10:35:23.412037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:29:29.309 [2024-11-25 10:35:23.412056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-11-25 10:35:23.456439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.309 [2024-11-25 10:35:23.456500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:29.309 [2024-11-25 10:35:23.456519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.309 [2024-11-25 10:35:23.456532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-11-25 10:35:23.456610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.309 [2024-11-25 10:35:23.456627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:29.309 [2024-11-25 10:35:23.456640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.309 [2024-11-25 10:35:23.456660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-11-25 10:35:23.456754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.309 [2024-11-25 10:35:23.456800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:29.309 [2024-11-25 10:35:23.456817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.309 [2024-11-25 10:35:23.456828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-11-25 10:35:23.456854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.310 [2024-11-25 10:35:23.456868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:29.310 [2024-11-25 10:35:23.456880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.310 [2024-11-25 10:35:23.456902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.310 [2024-11-25 10:35:23.566556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.310 [2024-11-25 10:35:23.566619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:29.310 [2024-11-25 10:35:23.566640] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.310 [2024-11-25 10:35:23.566653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.655764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.655871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:29.568 [2024-11-25 10:35:23.655892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.655905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.656056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.568 [2024-11-25 10:35:23.656070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.656082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.656149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.568 [2024-11-25 10:35:23.656162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.656181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.656347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.568 [2024-11-25 10:35:23.656362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.656373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.656446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:29.568 [2024-11-25 10:35:23.656459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.656471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.656542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.568 [2024-11-25 10:35:23.656554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.656566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.568 [2024-11-25 10:35:23.656649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.568 [2024-11-25 10:35:23.656663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.568 [2024-11-25 10:35:23.656674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.568 [2024-11-25 10:35:23.656849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.034 ms, result 0 00:29:30.502 00:29:30.502 00:29:30.502 
10:35:24 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:33.031 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:33.031 10:35:26 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:33.031 [2024-11-25 10:35:27.011560] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:29:33.031 [2024-11-25 10:35:27.011941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80374 ] 00:29:33.031 [2024-11-25 10:35:27.191889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.031 [2024-11-25 10:35:27.345657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.604 [2024-11-25 10:35:27.723775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.604 [2024-11-25 10:35:27.723894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.604 [2024-11-25 10:35:27.890008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.890086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:33.604 [2024-11-25 10:35:27.890133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:33.604 [2024-11-25 10:35:27.890146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.890211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.890230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:33.604 [2024-11-25 10:35:27.890249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:33.604 [2024-11-25 10:35:27.890260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.890290] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:33.604 [2024-11-25 10:35:27.891295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:33.604 [2024-11-25 10:35:27.891346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.891360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:33.604 [2024-11-25 10:35:27.891374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:29:33.604 [2024-11-25 10:35:27.891385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.893422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:33.604 [2024-11-25 10:35:27.911330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.911410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:33.604 [2024-11-25 10:35:27.911445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.926 ms 00:29:33.604 [2024-11-25 10:35:27.911475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 
[2024-11-25 10:35:27.911553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.911574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:33.604 [2024-11-25 10:35:27.911587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:33.604 [2024-11-25 10:35:27.911599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.921210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.921258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:33.604 [2024-11-25 10:35:27.921277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.516 ms 00:29:33.604 [2024-11-25 10:35:27.921290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.921405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.921425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:33.604 [2024-11-25 10:35:27.921438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:33.604 [2024-11-25 10:35:27.921450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.921516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.921535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:33.604 [2024-11-25 10:35:27.921548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:33.604 [2024-11-25 10:35:27.921559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.921597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:33.604 [2024-11-25 10:35:27.926796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.926839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:33.604 [2024-11-25 10:35:27.926856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.197 ms 00:29:33.604 [2024-11-25 10:35:27.926874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.926938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.604 [2024-11-25 10:35:27.926956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:33.604 [2024-11-25 10:35:27.926969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:33.604 [2024-11-25 10:35:27.926980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.604 [2024-11-25 10:35:27.927031] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:33.604 [2024-11-25 10:35:27.927064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:33.604 [2024-11-25 10:35:27.927107] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:33.604 [2024-11-25 10:35:27.927133] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:33.604 [2024-11-25 10:35:27.927245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:29:33.604 [2024-11-25 10:35:27.927261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:33.604 [2024-11-25 10:35:27.927278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:33.605 [2024-11-25 10:35:27.927294] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:33.605 [2024-11-25 10:35:27.927308] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:33.605 [2024-11-25 10:35:27.927321] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:33.605 [2024-11-25 10:35:27.927332] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:33.605 [2024-11-25 10:35:27.927343] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:33.605 [2024-11-25 10:35:27.927355] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:33.605 [2024-11-25 10:35:27.927372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.605 [2024-11-25 10:35:27.927384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:33.605 [2024-11-25 10:35:27.927397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:29:33.605 [2024-11-25 10:35:27.927408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.605 [2024-11-25 10:35:27.927507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.605 [2024-11-25 10:35:27.927522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:33.605 [2024-11-25 10:35:27.927535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:33.605 [2024-11-25 10:35:27.927553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.605 [2024-11-25 10:35:27.927717] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:33.605 [2024-11-25 10:35:27.927784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:33.605 [2024-11-25 10:35:27.927810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.605 [2024-11-25 10:35:27.927830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.927849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:33.605 [2024-11-25 10:35:27.927868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.927888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:33.605 [2024-11-25 10:35:27.927909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:33.605 [2024-11-25 10:35:27.927927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:33.605 [2024-11-25 10:35:27.927940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.605 [2024-11-25 10:35:27.927951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:33.605 [2024-11-25 10:35:27.927962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:33.605 [2024-11-25 10:35:27.927973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.605 [2024-11-25 10:35:27.927984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:29:33.605 [2024-11-25 10:35:27.927995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:33.605 [2024-11-25 10:35:27.928019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:33.605 [2024-11-25 10:35:27.928043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:33.605 [2024-11-25 10:35:27.928076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:33.605 [2024-11-25 10:35:27.928109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:33.605 [2024-11-25 10:35:27.928140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:33.605 [2024-11-25 10:35:27.928173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:33.605 [2024-11-25 10:35:27.928206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.605 [2024-11-25 10:35:27.928228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:33.605 [2024-11-25 10:35:27.928239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:33.605 [2024-11-25 10:35:27.928249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.605 [2024-11-25 10:35:27.928260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:33.605 [2024-11-25 10:35:27.928272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:33.605 [2024-11-25 10:35:27.928283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:33.605 [2024-11-25 10:35:27.928305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:33.605 [2024-11-25 10:35:27.928315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928325] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:33.605 [2024-11-25 10:35:27.928337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:33.605 [2024-11-25 10:35:27.928349] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.605 [2024-11-25 10:35:27.928372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:33.605 [2024-11-25 10:35:27.928383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:33.605 [2024-11-25 10:35:27.928394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:33.605 [2024-11-25 10:35:27.928405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:33.605 [2024-11-25 10:35:27.928415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:33.605 [2024-11-25 10:35:27.928426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:33.605 [2024-11-25 10:35:27.928438] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:33.605 [2024-11-25 10:35:27.928453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:33.605 [2024-11-25 10:35:27.928478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:33.605 [2024-11-25 10:35:27.928490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:33.605 [2024-11-25 10:35:27.928502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:33.605 [2024-11-25 10:35:27.928514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:33.605 [2024-11-25 10:35:27.928525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:33.605 [2024-11-25 10:35:27.928537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:33.605 [2024-11-25 10:35:27.928548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:33.605 [2024-11-25 10:35:27.928560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:33.605 [2024-11-25 10:35:27.928571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:33.605 [2024-11-25 10:35:27.928630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:33.605 [2024-11-25 10:35:27.928652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:33.605 [2024-11-25 10:35:27.928678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:33.605 [2024-11-25 10:35:27.928690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:33.605 [2024-11-25 10:35:27.928702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:33.605 [2024-11-25 10:35:27.928716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.605 [2024-11-25 10:35:27.928729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:33.605 [2024-11-25 10:35:27.928741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:29:33.605 [2024-11-25 10:35:27.928752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:27.970777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:27.970851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.865 [2024-11-25 10:35:27.970878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.922 ms 00:29:33.865 [2024-11-25 10:35:27.970892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:27.971020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:27.971036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:33.865 [2024-11-25 10:35:27.971050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:33.865 [2024-11-25 10:35:27.971062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.032077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.032153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.865 [2024-11-25 10:35:28.032190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.918 ms 00:29:33.865 [2024-11-25 10:35:28.032204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.032280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.032298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.865 [2024-11-25 10:35:28.032312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:33.865 [2024-11-25 10:35:28.032338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.033023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.033053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:29:33.865 [2024-11-25 10:35:28.033069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:29:33.865 [2024-11-25 10:35:28.033082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.033264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.033285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.865 [2024-11-25 10:35:28.033298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:29:33.865 [2024-11-25 10:35:28.033323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.054184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.054242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.865 [2024-11-25 10:35:28.054282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.831 ms 00:29:33.865 [2024-11-25 10:35:28.054294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.072194] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:33.865 [2024-11-25 10:35:28.072254] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:33.865 [2024-11-25 10:35:28.072290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.072318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:33.865 [2024-11-25 10:35:28.072346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.808 ms 00:29:33.865 [2024-11-25 10:35:28.072358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.103015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.103077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:33.865 [2024-11-25 10:35:28.103111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.611 ms 00:29:33.865 [2024-11-25 10:35:28.103123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.119416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.119505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:33.865 [2024-11-25 10:35:28.119539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.246 ms 00:29:33.865 [2024-11-25 10:35:28.119551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.135779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.135830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:33.865 [2024-11-25 10:35:28.135848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.156 ms 00:29:33.865 [2024-11-25 10:35:28.135860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.865 [2024-11-25 10:35:28.136715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.865 [2024-11-25 10:35:28.136795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:33.865 [2024-11-25 
10:35:28.136814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:29:33.865 [2024-11-25 10:35:28.136831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.124 [2024-11-25 10:35:28.219507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.124 [2024-11-25 10:35:28.219620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:34.124 [2024-11-25 10:35:28.219667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.642 ms 00:29:34.124 [2024-11-25 10:35:28.219680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.124 [2024-11-25 10:35:28.233494] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:34.124 [2024-11-25 10:35:28.237753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.124 [2024-11-25 10:35:28.237798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:34.124 [2024-11-25 10:35:28.237819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.962 ms 00:29:34.124 [2024-11-25 10:35:28.237831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.237969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.125 [2024-11-25 10:35:28.237989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:34.125 [2024-11-25 10:35:28.238004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:34.125 [2024-11-25 10:35:28.238020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.238150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.125 [2024-11-25 10:35:28.238179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:34.125 [2024-11-25 10:35:28.238194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:34.125 [2024-11-25 10:35:28.238206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.238239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.125 [2024-11-25 10:35:28.238255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:34.125 [2024-11-25 10:35:28.238268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:34.125 [2024-11-25 10:35:28.238279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.238326] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:34.125 [2024-11-25 10:35:28.238347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.125 [2024-11-25 10:35:28.238359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:34.125 [2024-11-25 10:35:28.238371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:34.125 [2024-11-25 10:35:28.238383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.271234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.125 [2024-11-25 10:35:28.271283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:34.125 [2024-11-25 10:35:28.271302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.825 ms 00:29:34.125 [2024-11-25 10:35:28.271323] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.271416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.125 [2024-11-25 10:35:28.271436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:34.125 [2024-11-25 10:35:28.271450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:34.125 [2024-11-25 10:35:28.271462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.125 [2024-11-25 10:35:28.272763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.233 ms, result 0 00:29:35.065 [~40 incremental spdk_dd progress ticks elided: the copy advanced steadily from 25/1024 MB at 2024-11-25T10:35:30Z to 1019/1024 MB at 2024-11-25T10:36:07Z, at 23-29 MBps per tick] [2024-11-25T10:36:07.703Z] Copying: 1048376/1048576 [kB] (4096 kBps) [2024-11-25T10:36:07.703Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-25 10:36:07.520495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.520604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:13.370 [2024-11-25 10:36:07.520628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:13.370 [2024-11-25 10:36:07.520653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:13.370 [2024-11-25 10:36:07.522891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:13.370 [2024-11-25 10:36:07.527684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.527727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:13.370 [2024-11-25 10:36:07.527746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:30:13.370 [2024-11-25 10:36:07.527758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.370 [2024-11-25 10:36:07.541238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.541302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:13.370 [2024-11-25 10:36:07.541351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.240 ms 00:30:13.370 [2024-11-25 10:36:07.541365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.370 [2024-11-25 10:36:07.562087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.562130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:13.370 [2024-11-25 10:36:07.562148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.674 ms 00:30:13.370 [2024-11-25 10:36:07.562161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.370 [2024-11-25 10:36:07.568803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.568860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:13.370 [2024-11-25 10:36:07.568893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.602 ms 00:30:13.370 [2024-11-25 10:36:07.568905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.370 [2024-11-25 10:36:07.600529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.600610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:13.370 [2024-11-25 10:36:07.600643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.534 ms 00:30:13.370 [2024-11-25 10:36:07.600655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.370 [2024-11-25 10:36:07.618291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.370 [2024-11-25 10:36:07.618367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:13.370 [2024-11-25 10:36:07.618385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.583 ms 00:30:13.370 [2024-11-25 10:36:07.618398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.630 [2024-11-25 10:36:07.716955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.630 [2024-11-25 10:36:07.717014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:13.630 [2024-11-25 10:36:07.717033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.500 ms 00:30:13.630 [2024-11-25 10:36:07.717047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.630 [2024-11-25 10:36:07.749259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.630 [2024-11-25 10:36:07.749348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info 
metadata 00:30:13.630 [2024-11-25 10:36:07.749397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.190 ms 00:30:13.630 [2024-11-25 10:36:07.749409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.630 [2024-11-25 10:36:07.779815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.630 [2024-11-25 10:36:07.779899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:13.630 [2024-11-25 10:36:07.779932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.363 ms 00:30:13.630 [2024-11-25 10:36:07.779944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.630 [2024-11-25 10:36:07.810167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.630 [2024-11-25 10:36:07.810239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:13.630 [2024-11-25 10:36:07.810272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.179 ms 00:30:13.630 [2024-11-25 10:36:07.810284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.630 [2024-11-25 10:36:07.840717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.630 [2024-11-25 10:36:07.840800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:13.630 [2024-11-25 10:36:07.840819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.328 ms 00:30:13.630 [2024-11-25 10:36:07.840831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.630 [2024-11-25 10:36:07.840889] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:13.630 [2024-11-25 10:36:07.840912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115456 / 261120 wr_cnt: 1 state: open 00:30:13.630 [2024-11-25 10:36:07.840927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.840940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.840968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.840980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.840992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: 
free 00:30:13.630 [2024-11-25 10:36:07.841090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 
261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:13.630 [2024-11-25 10:36:07.841418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.841994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842018] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:13.631 [2024-11-25 10:36:07.842186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:13.631 [2024-11-25 10:36:07.842197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4a74b76-6e08-405f-b94b-34432b8ef08f 00:30:13.631 [2024-11-25 10:36:07.842219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115456 00:30:13.631 [2024-11-25 10:36:07.842231] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116416 00:30:13.631 [2024-11-25 10:36:07.842241] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115456 00:30:13.631 [2024-11-25 10:36:07.842254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:30:13.631 [2024-11-25 10:36:07.842265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:13.631 [2024-11-25 10:36:07.842284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:13.631 [2024-11-25 10:36:07.842308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:13.631 [2024-11-25 10:36:07.842319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:13.631 [2024-11-25 10:36:07.842330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:13.631 [2024-11-25 10:36:07.842341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.631 [2024-11-25 10:36:07.842354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:13.631 [2024-11-25 10:36:07.842366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:30:13.631 [2024-11-25 10:36:07.842377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.631 [2024-11-25 
10:36:07.859548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.631 [2024-11-25 10:36:07.859602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:13.631 [2024-11-25 10:36:07.859635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.130 ms 00:30:13.631 [2024-11-25 10:36:07.859655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.631 [2024-11-25 10:36:07.860173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.631 [2024-11-25 10:36:07.860199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:13.631 [2024-11-25 10:36:07.860213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:30:13.631 [2024-11-25 10:36:07.860225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.631 [2024-11-25 10:36:07.904507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.631 [2024-11-25 10:36:07.904609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:13.631 [2024-11-25 10:36:07.904652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.631 [2024-11-25 10:36:07.904664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.631 [2024-11-25 10:36:07.904754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.631 [2024-11-25 10:36:07.904769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:13.631 [2024-11-25 10:36:07.904797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.631 [2024-11-25 10:36:07.904824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.631 [2024-11-25 10:36:07.904947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.631 [2024-11-25 10:36:07.904966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:13.631 [2024-11-25 10:36:07.904980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.631 [2024-11-25 10:36:07.904999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.631 [2024-11-25 10:36:07.905024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.632 [2024-11-25 10:36:07.905038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:13.632 [2024-11-25 10:36:07.905050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.632 [2024-11-25 10:36:07.905061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.010385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.010513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:13.891 [2024-11-25 10:36:08.010541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.010554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.095506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.095587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:13.891 [2024-11-25 10:36:08.095622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.095634] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.095739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.095756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:13.891 [2024-11-25 10:36:08.095769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.095781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.095879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.095896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:13.891 [2024-11-25 10:36:08.095909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.095921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.096199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.096229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:13.891 [2024-11-25 10:36:08.096245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.096257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.096317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.096335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:13.891 [2024-11-25 10:36:08.096347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.096390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.096438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.096454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:13.891 [2024-11-25 10:36:08.096466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.096478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.096538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.891 [2024-11-25 10:36:08.096557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:13.891 [2024-11-25 10:36:08.096570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.891 [2024-11-25 10:36:08.096596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.891 [2024-11-25 10:36:08.096752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 580.405 ms, result 0 00:30:15.292 00:30:15.292 00:30:15.292 10:36:09 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:15.550 [2024-11-25 10:36:09.689962] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
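The spdk_dd invocation above is the read-back half of the restore test: after the first 'FTL shutdown' finishes with result 0, restore.sh re-opens the ftl0 bdev and dumps --count=262144 blocks starting at block --skip=131072 into testfile, to be compared against the reference data written before the shutdown. At the 4 KiB FTL block size implied by the progress meter, 262144 blocks is exactly the 1024 MiB copied earlier. A minimal sketch of that read-back-and-verify step, reusing only paths and flags visible in this log (the testfile.md5 checksum file is a hypothetical name for a digest saved before shutdown, not something this log shows):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Dump 262144 FTL blocks, starting at block 131072, from bdev ftl0 to a file.
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --skip=131072 --count=262144
  # Hypothetical verification step: compare the dumped range against a
  # checksum recorded before the FTL device was shut down.
  md5sum -c "$SPDK/test/ftl/testfile.md5"

Note that spdk_dd loads the bdev stack in-process from the JSON config, which is why the 'Starting SPDK v25.01-pre ... initialization' banner that follows is a fresh SPDK application start (DPDK EAL parameters, reactor on core 0) rather than a connection to an existing target.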
00:30:15.550 [2024-11-25 10:36:09.690169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80789 ] 00:30:15.550 [2024-11-25 10:36:09.876032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.809 [2024-11-25 10:36:10.010089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.068 [2024-11-25 10:36:10.372111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.068 [2024-11-25 10:36:10.372220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.328 [2024-11-25 10:36:10.536376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.536457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:16.328 [2024-11-25 10:36:10.536516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:16.328 [2024-11-25 10:36:10.536529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.536602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.536621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:16.328 [2024-11-25 10:36:10.536638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:16.328 [2024-11-25 10:36:10.536649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.536678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:16.328 [2024-11-25 10:36:10.537599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:16.328 [2024-11-25 10:36:10.537635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.537649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:16.328 [2024-11-25 10:36:10.537662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:30:16.328 [2024-11-25 10:36:10.537674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.539620] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:16.328 [2024-11-25 10:36:10.557244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.557305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:16.328 [2024-11-25 10:36:10.557339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.625 ms 00:30:16.328 [2024-11-25 10:36:10.557351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.557427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.557446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:16.328 [2024-11-25 10:36:10.557459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:16.328 [2024-11-25 10:36:10.557469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.566492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:16.328 [2024-11-25 10:36:10.566539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:16.328 [2024-11-25 10:36:10.566555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.898 ms 00:30:16.328 [2024-11-25 10:36:10.566568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.566674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.566693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:16.328 [2024-11-25 10:36:10.566705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:16.328 [2024-11-25 10:36:10.566717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.566801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.566821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:16.328 [2024-11-25 10:36:10.566835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:16.328 [2024-11-25 10:36:10.566846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.566885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:16.328 [2024-11-25 10:36:10.571907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.571959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:16.328 [2024-11-25 10:36:10.571990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.032 ms 00:30:16.328 [2024-11-25 10:36:10.572006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.572049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.572065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:16.328 [2024-11-25 10:36:10.572077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:16.328 [2024-11-25 10:36:10.572088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.572181] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:16.328 [2024-11-25 10:36:10.572213] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:16.328 [2024-11-25 10:36:10.572255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:16.328 [2024-11-25 10:36:10.572279] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:16.328 [2024-11-25 10:36:10.572390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:16.328 [2024-11-25 10:36:10.572405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:16.328 [2024-11-25 10:36:10.572421] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:16.328 [2024-11-25 10:36:10.572447] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:16.328 [2024-11-25 10:36:10.572461] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:16.328 [2024-11-25 10:36:10.572473] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:16.328 [2024-11-25 10:36:10.572484] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:16.328 [2024-11-25 10:36:10.572494] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:16.328 [2024-11-25 10:36:10.572506] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:16.328 [2024-11-25 10:36:10.572522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.572534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:16.328 [2024-11-25 10:36:10.572545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:30:16.328 [2024-11-25 10:36:10.572556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.572653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-11-25 10:36:10.572667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:16.328 [2024-11-25 10:36:10.572680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:16.328 [2024-11-25 10:36:10.572691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-11-25 10:36:10.572809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:16.328 [2024-11-25 10:36:10.572859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:16.328 [2024-11-25 10:36:10.572874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.328 [2024-11-25 10:36:10.572885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.328 [2024-11-25 10:36:10.572897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:16.328 [2024-11-25 10:36:10.572907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:16.328 [2024-11-25 10:36:10.572921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:16.328 [2024-11-25 10:36:10.572932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:16.328 [2024-11-25 10:36:10.572943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:16.328 [2024-11-25 10:36:10.572954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.328 [2024-11-25 10:36:10.572965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:16.328 [2024-11-25 10:36:10.572975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:16.328 [2024-11-25 10:36:10.572986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.328 [2024-11-25 10:36:10.572996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:16.329 [2024-11-25 10:36:10.573006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:16.329 [2024-11-25 10:36:10.573030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:16.329 [2024-11-25 10:36:10.573051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573062] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:16.329 [2024-11-25 10:36:10.573083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:16.329 [2024-11-25 10:36:10.573115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:16.329 [2024-11-25 10:36:10.573146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:16.329 [2024-11-25 10:36:10.573177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:16.329 [2024-11-25 10:36:10.573207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.329 [2024-11-25 10:36:10.573228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:16.329 [2024-11-25 10:36:10.573238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:16.329 [2024-11-25 10:36:10.573248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.329 [2024-11-25 10:36:10.573259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:16.329 [2024-11-25 10:36:10.573270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:16.329 [2024-11-25 10:36:10.573281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:16.329 [2024-11-25 10:36:10.573302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:16.329 [2024-11-25 10:36:10.573312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573322] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:16.329 [2024-11-25 10:36:10.573334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:16.329 [2024-11-25 10:36:10.573345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.329 [2024-11-25 10:36:10.573367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:16.329 [2024-11-25 10:36:10.573378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:16.329 [2024-11-25 10:36:10.573389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:16.329 
[2024-11-25 10:36:10.573399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:16.329 [2024-11-25 10:36:10.573408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:16.329 [2024-11-25 10:36:10.573418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:16.329 [2024-11-25 10:36:10.573431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:16.329 [2024-11-25 10:36:10.573445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:16.329 [2024-11-25 10:36:10.573470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:16.329 [2024-11-25 10:36:10.573480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:16.329 [2024-11-25 10:36:10.573492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:16.329 [2024-11-25 10:36:10.573503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:16.329 [2024-11-25 10:36:10.573515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:16.329 [2024-11-25 10:36:10.573526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:16.329 [2024-11-25 10:36:10.573537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:16.329 [2024-11-25 10:36:10.573548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:16.329 [2024-11-25 10:36:10.573559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:16.329 [2024-11-25 10:36:10.573618] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:16.329 [2024-11-25 10:36:10.573638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.329 [2024-11-25 10:36:10.573662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:16.329 [2024-11-25 10:36:10.573673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:16.329 [2024-11-25 10:36:10.573684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:16.329 [2024-11-25 10:36:10.573697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.329 [2024-11-25 10:36:10.573708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:16.329 [2024-11-25 10:36:10.573720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:30:16.329 [2024-11-25 10:36:10.573731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.329 [2024-11-25 10:36:10.614361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.329 [2024-11-25 10:36:10.614438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.329 [2024-11-25 10:36:10.614499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.551 ms 00:30:16.329 [2024-11-25 10:36:10.614513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.329 [2024-11-25 10:36:10.614637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.329 [2024-11-25 10:36:10.614653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:16.329 [2024-11-25 10:36:10.614666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:16.329 [2024-11-25 10:36:10.614678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.676207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.676295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.589 [2024-11-25 10:36:10.676332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.434 ms 00:30:16.589 [2024-11-25 10:36:10.676344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.676413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.676429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.589 [2024-11-25 10:36:10.676443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:16.589 [2024-11-25 10:36:10.676460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.677119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.677149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.589 [2024-11-25 10:36:10.677164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:30:16.589 [2024-11-25 10:36:10.677176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.677362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.677411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.589 [2024-11-25 10:36:10.677441] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:16.589 [2024-11-25 10:36:10.677459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.697864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.697918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.589 [2024-11-25 10:36:10.697942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.377 ms 00:30:16.589 [2024-11-25 10:36:10.697954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.715285] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:16.589 [2024-11-25 10:36:10.715341] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:16.589 [2024-11-25 10:36:10.715391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.715403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:16.589 [2024-11-25 10:36:10.715415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.304 ms 00:30:16.589 [2024-11-25 10:36:10.715427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.745197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.745279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:16.589 [2024-11-25 10:36:10.745297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.724 ms 00:30:16.589 [2024-11-25 10:36:10.745309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.760406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.760472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:16.589 [2024-11-25 10:36:10.760504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.048 ms 00:30:16.589 [2024-11-25 10:36:10.760515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.775789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.775855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:16.589 [2024-11-25 10:36:10.775888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.214 ms 00:30:16.589 [2024-11-25 10:36:10.775899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.776805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.776864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:16.589 [2024-11-25 10:36:10.776895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:30:16.589 [2024-11-25 10:36:10.776911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.852612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.852698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:16.589 [2024-11-25 10:36:10.852726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.645 ms 00:30:16.589 [2024-11-25 10:36:10.852739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.865362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:16.589 [2024-11-25 10:36:10.868551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.868604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:16.589 [2024-11-25 10:36:10.868620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.717 ms 00:30:16.589 [2024-11-25 10:36:10.868633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.868739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.868760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:16.589 [2024-11-25 10:36:10.868788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:16.589 [2024-11-25 10:36:10.868807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.870747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.870802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:16.589 [2024-11-25 10:36:10.870818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:30:16.589 [2024-11-25 10:36:10.870829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.870867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.870882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:16.589 [2024-11-25 10:36:10.870911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:16.589 [2024-11-25 10:36:10.870925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.870972] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:16.589 [2024-11-25 10:36:10.870993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.871005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:16.589 [2024-11-25 10:36:10.871018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:16.589 [2024-11-25 10:36:10.871029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.902809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.902850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:16.589 [2024-11-25 10:36:10.902868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.751 ms 00:30:16.589 [2024-11-25 10:36:10.902887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.589 [2024-11-25 10:36:10.902982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.589 [2024-11-25 10:36:10.903001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:16.590 [2024-11-25 10:36:10.903014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:16.590 [2024-11-25 10:36:10.903025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
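Each management step in the startup sequence above is logged by mngt/ftl_mngt.c as a four-record group: Action (line 427), name (428), duration (430), and status (431). Per-step durations can be tabulated from a captured console log with a one-liner like the following — a minimal sketch, assuming each *NOTICE* record sits on its own line and that the console was saved to build.log (a hypothetical file name):

  # Pair each step name (428:trace_step) with its duration (430:trace_step).
  # Assumes one record per line, as in a normally captured console log.
  awk -F'name: ' '/428:trace_step/ { name = $2 }
                  /430:trace_step/ { split($0, a, "duration: "); printf "%-32s %s\n", name, a[2] }' build.log

For this run it would single out Restore P2L checkpoints (75.645 ms) and Initialize NV cache (61.434 ms) as the dominant contributors to the 'FTL startup' total reported just below.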
00:30:16.590 [2024-11-25 10:36:10.905905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.147 ms, result 0 00:30:17.967  [2024-11-25T10:36:13.236Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-25T10:36:14.171Z] Copying: 47/1024 [MB] (24 MBps) [2024-11-25T10:36:15.134Z] Copying: 69/1024 [MB] (22 MBps) [2024-11-25T10:36:16.148Z] Copying: 92/1024 [MB] (22 MBps) [2024-11-25T10:36:17.523Z] Copying: 118/1024 [MB] (26 MBps) [2024-11-25T10:36:18.458Z] Copying: 143/1024 [MB] (25 MBps) [2024-11-25T10:36:19.391Z] Copying: 169/1024 [MB] (25 MBps) [2024-11-25T10:36:20.325Z] Copying: 195/1024 [MB] (25 MBps) [2024-11-25T10:36:21.259Z] Copying: 221/1024 [MB] (26 MBps) [2024-11-25T10:36:22.194Z] Copying: 247/1024 [MB] (26 MBps) [2024-11-25T10:36:23.655Z] Copying: 273/1024 [MB] (25 MBps) [2024-11-25T10:36:24.250Z] Copying: 298/1024 [MB] (24 MBps) [2024-11-25T10:36:25.186Z] Copying: 324/1024 [MB] (25 MBps) [2024-11-25T10:36:26.566Z] Copying: 351/1024 [MB] (26 MBps) [2024-11-25T10:36:27.132Z] Copying: 377/1024 [MB] (25 MBps) [2024-11-25T10:36:28.505Z] Copying: 403/1024 [MB] (26 MBps) [2024-11-25T10:36:29.442Z] Copying: 430/1024 [MB] (26 MBps) [2024-11-25T10:36:30.377Z] Copying: 453/1024 [MB] (23 MBps) [2024-11-25T10:36:31.312Z] Copying: 478/1024 [MB] (24 MBps) [2024-11-25T10:36:32.249Z] Copying: 502/1024 [MB] (24 MBps) [2024-11-25T10:36:33.184Z] Copying: 526/1024 [MB] (24 MBps) [2024-11-25T10:36:34.570Z] Copying: 552/1024 [MB] (25 MBps) [2024-11-25T10:36:35.154Z] Copying: 577/1024 [MB] (25 MBps) [2024-11-25T10:36:36.529Z] Copying: 602/1024 [MB] (25 MBps) [2024-11-25T10:36:37.463Z] Copying: 628/1024 [MB] (25 MBps) [2024-11-25T10:36:38.399Z] Copying: 651/1024 [MB] (23 MBps) [2024-11-25T10:36:39.339Z] Copying: 676/1024 [MB] (25 MBps) [2024-11-25T10:36:40.282Z] Copying: 702/1024 [MB] (25 MBps) [2024-11-25T10:36:41.218Z] Copying: 727/1024 [MB] (25 MBps) [2024-11-25T10:36:42.154Z] Copying: 752/1024 [MB] (25 MBps) [2024-11-25T10:36:43.533Z] Copying: 777/1024 [MB] (24 MBps) [2024-11-25T10:36:44.469Z] Copying: 802/1024 [MB] (25 MBps) [2024-11-25T10:36:45.405Z] Copying: 828/1024 [MB] (25 MBps) [2024-11-25T10:36:46.341Z] Copying: 854/1024 [MB] (25 MBps) [2024-11-25T10:36:47.278Z] Copying: 879/1024 [MB] (25 MBps) [2024-11-25T10:36:48.221Z] Copying: 905/1024 [MB] (25 MBps) [2024-11-25T10:36:49.157Z] Copying: 930/1024 [MB] (25 MBps) [2024-11-25T10:36:50.537Z] Copying: 956/1024 [MB] (25 MBps) [2024-11-25T10:36:51.471Z] Copying: 980/1024 [MB] (24 MBps) [2024-11-25T10:36:52.063Z] Copying: 1005/1024 [MB] (24 MBps) [2024-11-25T10:36:52.336Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-25 10:36:52.306980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.003 [2024-11-25 10:36:52.307065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:58.003 [2024-11-25 10:36:52.307088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:58.003 [2024-11-25 10:36:52.307101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.003 [2024-11-25 10:36:52.307149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:58.003 [2024-11-25 10:36:52.311359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.003 [2024-11-25 10:36:52.311398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:58.003 [2024-11-25 10:36:52.311413] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 4.187 ms 00:30:58.003 [2024-11-25 10:36:52.311426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.003 [2024-11-25 10:36:52.311680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.003 [2024-11-25 10:36:52.311707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:58.003 [2024-11-25 10:36:52.311722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:30:58.003 [2024-11-25 10:36:52.311733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.003 [2024-11-25 10:36:52.316600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.003 [2024-11-25 10:36:52.316645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:58.003 [2024-11-25 10:36:52.316662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.839 ms 00:30:58.003 [2024-11-25 10:36:52.316674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.003 [2024-11-25 10:36:52.323414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.003 [2024-11-25 10:36:52.323453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:58.003 [2024-11-25 10:36:52.323468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:30:58.003 [2024-11-25 10:36:52.323480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.262 [2024-11-25 10:36:52.355156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.262 [2024-11-25 10:36:52.355208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:58.262 [2024-11-25 10:36:52.355227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.594 ms 00:30:58.262 [2024-11-25 10:36:52.355239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.262 [2024-11-25 10:36:52.372834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.262 [2024-11-25 10:36:52.372911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:58.262 [2024-11-25 10:36:52.372932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.547 ms 00:30:58.262 [2024-11-25 10:36:52.372944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.262 [2024-11-25 10:36:52.483671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.262 [2024-11-25 10:36:52.483790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:58.262 [2024-11-25 10:36:52.483814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.668 ms 00:30:58.262 [2024-11-25 10:36:52.483830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.262 [2024-11-25 10:36:52.516395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.262 [2024-11-25 10:36:52.516453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:58.262 [2024-11-25 10:36:52.516472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.539 ms 00:30:58.262 [2024-11-25 10:36:52.516484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.262 [2024-11-25 10:36:52.546783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.262 [2024-11-25 10:36:52.546827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:58.262 
[2024-11-25 10:36:52.546861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.252 ms 00:30:58.262 [2024-11-25 10:36:52.546873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.262 [2024-11-25 10:36:52.576813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.262 [2024-11-25 10:36:52.576872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:58.262 [2024-11-25 10:36:52.576891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.886 ms 00:30:58.262 [2024-11-25 10:36:52.576903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.521 [2024-11-25 10:36:52.606817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.521 [2024-11-25 10:36:52.606859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:58.521 [2024-11-25 10:36:52.606876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.818 ms 00:30:58.521 [2024-11-25 10:36:52.606888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.521 [2024-11-25 10:36:52.606932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:58.521 [2024-11-25 10:36:52.606956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:30:58.521 [2024-11-25 10:36:52.606971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.606983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.606997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607151] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:58.521 [2024-11-25 10:36:52.607163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 
[2024-11-25 10:36:52.607449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:30:58.522 [2024-11-25 10:36:52.607745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.607995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:58.522 [2024-11-25 10:36:52.608180] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:58.522 [2024-11-25 10:36:52.608203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4a74b76-6e08-405f-b94b-34432b8ef08f 00:30:58.522 [2024-11-25 10:36:52.608215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:30:58.522 [2024-11-25 10:36:52.608226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16576 00:30:58.522 [2024-11-25 10:36:52.608236] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15616 00:30:58.522 [2024-11-25 10:36:52.608249] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0615 00:30:58.523 [2024-11-25 10:36:52.608259] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:58.523 [2024-11-25 10:36:52.608287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:58.523 [2024-11-25 10:36:52.608298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:58.523 [2024-11-25 10:36:52.608320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:58.523 [2024-11-25 10:36:52.608331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:58.523 [2024-11-25 10:36:52.608342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.523 [2024-11-25 10:36:52.608354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:58.523 [2024-11-25 10:36:52.608366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:30:58.523 [2024-11-25 10:36:52.608377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.625361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.523 [2024-11-25 10:36:52.625402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:58.523 [2024-11-25 10:36:52.625419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.942 ms 00:30:58.523 [2024-11-25 10:36:52.625441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.625953] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.523 [2024-11-25 10:36:52.625982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:58.523 [2024-11-25 10:36:52.625997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:30:58.523 [2024-11-25 10:36:52.626009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.670356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.523 [2024-11-25 10:36:52.670412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.523 [2024-11-25 10:36:52.670436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.523 [2024-11-25 10:36:52.670449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.670535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.523 [2024-11-25 10:36:52.670552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.523 [2024-11-25 10:36:52.670564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.523 [2024-11-25 10:36:52.670575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.670660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.523 [2024-11-25 10:36:52.670680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.523 [2024-11-25 10:36:52.670693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.523 [2024-11-25 10:36:52.670712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.670736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.523 [2024-11-25 10:36:52.670751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.523 [2024-11-25 10:36:52.670764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.523 [2024-11-25 10:36:52.670792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.523 [2024-11-25 10:36:52.779911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.523 [2024-11-25 10:36:52.779986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.523 [2024-11-25 10:36:52.780011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.523 [2024-11-25 10:36:52.780024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.867325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.867399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.781 [2024-11-25 10:36:52.867418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.867431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.867539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.867556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.781 [2024-11-25 10:36:52.867570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.867581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.867646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.867667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.781 [2024-11-25 10:36:52.867681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.867693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.867854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.867884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.781 [2024-11-25 10:36:52.867899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.867916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.867974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.868001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:58.781 [2024-11-25 10:36:52.868015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.868027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.868075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.868091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.781 [2024-11-25 10:36:52.868103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.868114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.868183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.781 [2024-11-25 10:36:52.868200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.781 [2024-11-25 10:36:52.868212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.781 [2024-11-25 10:36:52.868231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.781 [2024-11-25 10:36:52.868404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 561.388 ms, result 0 00:30:59.717 00:30:59.717 00:30:59.717 10:36:53 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:02.249 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:02.249 10:36:55 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:02.249 10:36:55 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:02.249 10:36:55 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:02.249 Process with pid 79232 is not found 00:31:02.249 Remove shared memory files 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79232 00:31:02.249 10:36:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79232 ']' 00:31:02.249 10:36:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79232 
00:31:02.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79232) - No such process 00:31:02.249 10:36:56 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79232 is not found' 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:02.249 10:36:56 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:02.249 00:31:02.249 real 3m21.263s 00:31:02.249 user 3m5.589s 00:31:02.249 sys 0m18.104s 00:31:02.249 10:36:56 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.249 10:36:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:02.249 ************************************ 00:31:02.249 END TEST ftl_restore 00:31:02.249 ************************************ 00:31:02.249 10:36:56 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:02.249 10:36:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:02.249 10:36:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:02.249 10:36:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:02.249 ************************************ 00:31:02.249 START TEST ftl_dirty_shutdown 00:31:02.249 ************************************ 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:02.249 * Looking for test storage... 
00:31:02.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:02.249 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:02.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.250 --rc genhtml_branch_coverage=1 00:31:02.250 --rc genhtml_function_coverage=1 00:31:02.250 --rc genhtml_legend=1 00:31:02.250 --rc geninfo_all_blocks=1 00:31:02.250 --rc geninfo_unexecuted_blocks=1 00:31:02.250 00:31:02.250 ' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:02.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.250 --rc genhtml_branch_coverage=1 00:31:02.250 --rc genhtml_function_coverage=1 00:31:02.250 --rc genhtml_legend=1 00:31:02.250 --rc geninfo_all_blocks=1 00:31:02.250 --rc geninfo_unexecuted_blocks=1 00:31:02.250 00:31:02.250 ' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:02.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.250 --rc genhtml_branch_coverage=1 00:31:02.250 --rc genhtml_function_coverage=1 00:31:02.250 --rc genhtml_legend=1 00:31:02.250 --rc geninfo_all_blocks=1 00:31:02.250 --rc geninfo_unexecuted_blocks=1 00:31:02.250 00:31:02.250 ' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:02.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.250 --rc genhtml_branch_coverage=1 00:31:02.250 --rc genhtml_function_coverage=1 00:31:02.250 --rc genhtml_legend=1 00:31:02.250 --rc geninfo_all_blocks=1 00:31:02.250 --rc geninfo_unexecuted_blocks=1 00:31:02.250 00:31:02.250 ' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:02.250 10:36:56 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81315 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81315 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81315 ']' 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.250 10:36:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:02.509 [2024-11-25 10:36:56.612689] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
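The xtrace above shows the launch sequence used by these tests: dirty_shutdown.sh starts spdk_tgt on core mask 0x1 in the background, records its pid (81315 here), and blocks in waitforlisten until the RPC socket answers before issuing any bdev RPCs. In outline the pattern is as follows — a minimal sketch, not the harness's own waitforlisten implementation; the polling loop is an assumption, while the socket path matches the 'Waiting for process...' message above:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # Poll the RPC socket (/var/tmp/spdk.sock) until the target answers.
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svcpid" || exit 1   # give up if the target died during startup
    sleep 0.5
  done

Once the socket is live, the script proceeds to the bdev setup traced below: attaching the NVMe controller, carving an lvstore and a thin-provisioned lvol out of nvme0n1, and wiring up the NV cache device on 0000:00:10.0.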
00:31:02.509 [2024-11-25 10:36:56.613178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81315 ] 00:31:02.509 [2024-11-25 10:36:56.793610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.767 [2024-11-25 10:36:56.953316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:03.703 10:36:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:03.962 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:04.220 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.220 { 00:31:04.220 "name": "nvme0n1", 00:31:04.220 "aliases": [ 00:31:04.220 "1e27c610-37d6-40a6-a477-66ece3179974" 00:31:04.220 ], 00:31:04.220 "product_name": "NVMe disk", 00:31:04.220 "block_size": 4096, 00:31:04.220 "num_blocks": 1310720, 00:31:04.220 "uuid": "1e27c610-37d6-40a6-a477-66ece3179974", 00:31:04.220 "numa_id": -1, 00:31:04.220 "assigned_rate_limits": { 00:31:04.220 "rw_ios_per_sec": 0, 00:31:04.220 "rw_mbytes_per_sec": 0, 00:31:04.220 "r_mbytes_per_sec": 0, 00:31:04.220 "w_mbytes_per_sec": 0 00:31:04.220 }, 00:31:04.220 "claimed": true, 00:31:04.220 "claim_type": "read_many_write_one", 00:31:04.220 "zoned": false, 00:31:04.220 "supported_io_types": { 00:31:04.220 "read": true, 00:31:04.220 "write": true, 00:31:04.220 "unmap": true, 00:31:04.220 "flush": true, 00:31:04.220 "reset": true, 00:31:04.220 "nvme_admin": true, 00:31:04.220 "nvme_io": true, 00:31:04.220 "nvme_io_md": false, 00:31:04.220 "write_zeroes": true, 00:31:04.220 "zcopy": false, 00:31:04.220 "get_zone_info": false, 00:31:04.220 "zone_management": false, 00:31:04.220 "zone_append": false, 00:31:04.220 "compare": true, 00:31:04.220 "compare_and_write": false, 00:31:04.220 "abort": true, 00:31:04.220 "seek_hole": false, 00:31:04.220 "seek_data": false, 00:31:04.220 
"copy": true, 00:31:04.220 "nvme_iov_md": false 00:31:04.220 }, 00:31:04.220 "driver_specific": { 00:31:04.220 "nvme": [ 00:31:04.220 { 00:31:04.220 "pci_address": "0000:00:11.0", 00:31:04.220 "trid": { 00:31:04.220 "trtype": "PCIe", 00:31:04.220 "traddr": "0000:00:11.0" 00:31:04.220 }, 00:31:04.220 "ctrlr_data": { 00:31:04.220 "cntlid": 0, 00:31:04.220 "vendor_id": "0x1b36", 00:31:04.220 "model_number": "QEMU NVMe Ctrl", 00:31:04.220 "serial_number": "12341", 00:31:04.220 "firmware_revision": "8.0.0", 00:31:04.220 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:04.220 "oacs": { 00:31:04.220 "security": 0, 00:31:04.220 "format": 1, 00:31:04.220 "firmware": 0, 00:31:04.220 "ns_manage": 1 00:31:04.220 }, 00:31:04.220 "multi_ctrlr": false, 00:31:04.220 "ana_reporting": false 00:31:04.220 }, 00:31:04.220 "vs": { 00:31:04.220 "nvme_version": "1.4" 00:31:04.220 }, 00:31:04.220 "ns_data": { 00:31:04.220 "id": 1, 00:31:04.220 "can_share": false 00:31:04.220 } 00:31:04.220 } 00:31:04.220 ], 00:31:04.220 "mp_policy": "active_passive" 00:31:04.220 } 00:31:04.220 } 00:31:04.220 ]' 00:31:04.220 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:04.220 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:04.220 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:04.479 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:04.738 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=45b2af95-278b-42b9-a2fe-03a52a784609 00:31:04.738 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:04.738 10:36:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 45b2af95-278b-42b9-a2fe-03a52a784609 00:31:04.996 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:05.254 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=b705a355-d535-4e51-9650-a791a8356396 00:31:05.254 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b705a355-d535-4e51-9650-a791a8356396 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:05.513 10:36:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:05.771 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:05.771 { 00:31:05.771 "name": "1db31c10-0941-4cab-8226-d6ccb87eb2bf", 00:31:05.771 "aliases": [ 00:31:05.771 "lvs/nvme0n1p0" 00:31:05.771 ], 00:31:05.771 "product_name": "Logical Volume", 00:31:05.771 "block_size": 4096, 00:31:05.771 "num_blocks": 26476544, 00:31:05.771 "uuid": "1db31c10-0941-4cab-8226-d6ccb87eb2bf", 00:31:05.771 "assigned_rate_limits": { 00:31:05.771 "rw_ios_per_sec": 0, 00:31:05.771 "rw_mbytes_per_sec": 0, 00:31:05.771 "r_mbytes_per_sec": 0, 00:31:05.771 "w_mbytes_per_sec": 0 00:31:05.771 }, 00:31:05.771 "claimed": false, 00:31:05.771 "zoned": false, 00:31:05.771 "supported_io_types": { 00:31:05.771 "read": true, 00:31:05.771 "write": true, 00:31:05.771 "unmap": true, 00:31:05.771 "flush": false, 00:31:05.771 "reset": true, 00:31:05.771 "nvme_admin": false, 00:31:05.771 "nvme_io": false, 00:31:05.771 "nvme_io_md": false, 00:31:05.771 "write_zeroes": true, 00:31:05.771 "zcopy": false, 00:31:05.771 "get_zone_info": false, 00:31:05.771 "zone_management": false, 00:31:05.771 "zone_append": false, 00:31:05.771 "compare": false, 00:31:05.771 "compare_and_write": false, 00:31:05.771 "abort": false, 00:31:05.771 "seek_hole": true, 00:31:05.771 "seek_data": true, 00:31:05.771 "copy": false, 00:31:05.771 "nvme_iov_md": false 00:31:05.771 }, 00:31:05.771 "driver_specific": { 00:31:05.771 "lvol": { 00:31:05.771 "lvol_store_uuid": "b705a355-d535-4e51-9650-a791a8356396", 00:31:05.771 "base_bdev": "nvme0n1", 00:31:05.771 "thin_provision": true, 00:31:05.771 "num_allocated_clusters": 0, 00:31:05.771 "snapshot": false, 00:31:05.771 "clone": false, 00:31:05.771 "esnap_clone": false 00:31:05.771 } 00:31:05.771 } 00:31:05.771 } 00:31:05.771 ]' 00:31:05.771 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:05.771 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:05.771 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:06.044 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:06.044 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:06.044 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:06.044 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:06.044 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:06.045 10:37:00 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:06.318 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:06.576 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:06.576 { 00:31:06.576 "name": "1db31c10-0941-4cab-8226-d6ccb87eb2bf", 00:31:06.576 "aliases": [ 00:31:06.576 "lvs/nvme0n1p0" 00:31:06.576 ], 00:31:06.576 "product_name": "Logical Volume", 00:31:06.576 "block_size": 4096, 00:31:06.576 "num_blocks": 26476544, 00:31:06.576 "uuid": "1db31c10-0941-4cab-8226-d6ccb87eb2bf", 00:31:06.576 "assigned_rate_limits": { 00:31:06.576 "rw_ios_per_sec": 0, 00:31:06.576 "rw_mbytes_per_sec": 0, 00:31:06.576 "r_mbytes_per_sec": 0, 00:31:06.576 "w_mbytes_per_sec": 0 00:31:06.576 }, 00:31:06.576 "claimed": false, 00:31:06.576 "zoned": false, 00:31:06.576 "supported_io_types": { 00:31:06.576 "read": true, 00:31:06.576 "write": true, 00:31:06.576 "unmap": true, 00:31:06.576 "flush": false, 00:31:06.576 "reset": true, 00:31:06.576 "nvme_admin": false, 00:31:06.576 "nvme_io": false, 00:31:06.576 "nvme_io_md": false, 00:31:06.576 "write_zeroes": true, 00:31:06.576 "zcopy": false, 00:31:06.576 "get_zone_info": false, 00:31:06.576 "zone_management": false, 00:31:06.576 "zone_append": false, 00:31:06.576 "compare": false, 00:31:06.576 "compare_and_write": false, 00:31:06.576 "abort": false, 00:31:06.576 "seek_hole": true, 00:31:06.576 "seek_data": true, 00:31:06.576 "copy": false, 00:31:06.576 "nvme_iov_md": false 00:31:06.576 }, 00:31:06.576 "driver_specific": { 00:31:06.576 "lvol": { 00:31:06.576 "lvol_store_uuid": "b705a355-d535-4e51-9650-a791a8356396", 00:31:06.576 "base_bdev": "nvme0n1", 00:31:06.576 "thin_provision": true, 00:31:06.576 "num_allocated_clusters": 0, 00:31:06.576 "snapshot": false, 00:31:06.576 "clone": false, 00:31:06.577 "esnap_clone": false 00:31:06.577 } 00:31:06.577 } 00:31:06.577 } 00:31:06.577 ]' 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:06.577 10:37:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:06.835 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1db31c10-0941-4cab-8226-d6ccb87eb2bf 00:31:07.093 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:07.093 { 00:31:07.093 "name": "1db31c10-0941-4cab-8226-d6ccb87eb2bf", 00:31:07.093 "aliases": [ 00:31:07.093 "lvs/nvme0n1p0" 00:31:07.093 ], 00:31:07.093 "product_name": "Logical Volume", 00:31:07.093 "block_size": 4096, 00:31:07.093 "num_blocks": 26476544, 00:31:07.093 "uuid": "1db31c10-0941-4cab-8226-d6ccb87eb2bf", 00:31:07.093 "assigned_rate_limits": { 00:31:07.093 "rw_ios_per_sec": 0, 00:31:07.093 "rw_mbytes_per_sec": 0, 00:31:07.093 "r_mbytes_per_sec": 0, 00:31:07.093 "w_mbytes_per_sec": 0 00:31:07.093 }, 00:31:07.093 "claimed": false, 00:31:07.093 "zoned": false, 00:31:07.093 "supported_io_types": { 00:31:07.093 "read": true, 00:31:07.093 "write": true, 00:31:07.093 "unmap": true, 00:31:07.093 "flush": false, 00:31:07.093 "reset": true, 00:31:07.093 "nvme_admin": false, 00:31:07.093 "nvme_io": false, 00:31:07.093 "nvme_io_md": false, 00:31:07.093 "write_zeroes": true, 00:31:07.094 "zcopy": false, 00:31:07.094 "get_zone_info": false, 00:31:07.094 "zone_management": false, 00:31:07.094 "zone_append": false, 00:31:07.094 "compare": false, 00:31:07.094 "compare_and_write": false, 00:31:07.094 "abort": false, 00:31:07.094 "seek_hole": true, 00:31:07.094 "seek_data": true, 00:31:07.094 "copy": false, 00:31:07.094 "nvme_iov_md": false 00:31:07.094 }, 00:31:07.094 "driver_specific": { 00:31:07.094 "lvol": { 00:31:07.094 "lvol_store_uuid": "b705a355-d535-4e51-9650-a791a8356396", 00:31:07.094 "base_bdev": "nvme0n1", 00:31:07.094 "thin_provision": true, 00:31:07.094 "num_allocated_clusters": 0, 00:31:07.094 "snapshot": false, 00:31:07.094 "clone": false, 00:31:07.094 "esnap_clone": false 00:31:07.094 } 00:31:07.094 } 00:31:07.094 } 00:31:07.094 ]' 00:31:07.094 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:07.094 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:07.094 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1db31c10-0941-4cab-8226-d6ccb87eb2bf 
--l2p_dram_limit 10' 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:07.353 10:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1db31c10-0941-4cab-8226-d6ccb87eb2bf --l2p_dram_limit 10 -c nvc0n1p0 00:31:07.615 [2024-11-25 10:37:01.704618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.704688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:07.615 [2024-11-25 10:37:01.704718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:07.615 [2024-11-25 10:37:01.704733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.704840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.704871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:07.615 [2024-11-25 10:37:01.704889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:07.615 [2024-11-25 10:37:01.704903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.704945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:07.615 [2024-11-25 10:37:01.705981] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:07.615 [2024-11-25 10:37:01.706022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.706038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:07.615 [2024-11-25 10:37:01.706054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:31:07.615 [2024-11-25 10:37:01.706067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.706308] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a2e69933-1e21-473c-805a-0a6646f66532 00:31:07.615 [2024-11-25 10:37:01.708183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.708220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:07.615 [2024-11-25 10:37:01.708237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:07.615 [2024-11-25 10:37:01.708255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.717887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.717934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:07.615 [2024-11-25 10:37:01.717954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:31:07.615 [2024-11-25 10:37:01.717970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.718130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.718156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:07.615 [2024-11-25 10:37:01.718171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:31:07.615 [2024-11-25 10:37:01.718191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.718273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.718295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:07.615 [2024-11-25 10:37:01.718310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:07.615 [2024-11-25 10:37:01.718329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.718364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:07.615 [2024-11-25 10:37:01.723581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.723620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:07.615 [2024-11-25 10:37:01.723642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.223 ms 00:31:07.615 [2024-11-25 10:37:01.723655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.615 [2024-11-25 10:37:01.723706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.615 [2024-11-25 10:37:01.723723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:07.615 [2024-11-25 10:37:01.723739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:07.616 [2024-11-25 10:37:01.723751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.616 [2024-11-25 10:37:01.723828] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:07.616 [2024-11-25 10:37:01.723992] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:07.616 [2024-11-25 10:37:01.724019] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:07.616 [2024-11-25 10:37:01.724037] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:07.616 [2024-11-25 10:37:01.724056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724072] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724088] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:07.616 [2024-11-25 10:37:01.724100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:07.616 [2024-11-25 10:37:01.724119] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:07.616 [2024-11-25 10:37:01.724131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:07.616 [2024-11-25 10:37:01.724147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.616 [2024-11-25 10:37:01.724160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:07.616 [2024-11-25 10:37:01.724176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:31:07.616 [2024-11-25 10:37:01.724201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.616 [2024-11-25 10:37:01.724304] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.616 [2024-11-25 10:37:01.724320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:07.616 [2024-11-25 10:37:01.724336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:07.616 [2024-11-25 10:37:01.724349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.616 [2024-11-25 10:37:01.724475] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:07.616 [2024-11-25 10:37:01.724495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:07.616 [2024-11-25 10:37:01.724512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:07.616 [2024-11-25 10:37:01.724553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:07.616 [2024-11-25 10:37:01.724594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:07.616 [2024-11-25 10:37:01.724620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:07.616 [2024-11-25 10:37:01.724633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:07.616 [2024-11-25 10:37:01.724648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:07.616 [2024-11-25 10:37:01.724660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:07.616 [2024-11-25 10:37:01.724674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:07.616 [2024-11-25 10:37:01.724686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:07.616 [2024-11-25 10:37:01.724715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:07.616 [2024-11-25 10:37:01.724758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:07.616 [2024-11-25 10:37:01.724818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:07.616 [2024-11-25 10:37:01.724862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724888] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:07.616 [2024-11-25 10:37:01.724900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.616 [2024-11-25 10:37:01.724927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:07.616 [2024-11-25 10:37:01.724945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:07.616 [2024-11-25 10:37:01.724957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:07.616 [2024-11-25 10:37:01.724972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:07.616 [2024-11-25 10:37:01.724984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:07.616 [2024-11-25 10:37:01.724999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:07.616 [2024-11-25 10:37:01.725011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:07.616 [2024-11-25 10:37:01.725026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:07.616 [2024-11-25 10:37:01.725037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.725052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:07.616 [2024-11-25 10:37:01.725065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:07.616 [2024-11-25 10:37:01.725079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.725091] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:07.616 [2024-11-25 10:37:01.725107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:07.616 [2024-11-25 10:37:01.725120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:07.616 [2024-11-25 10:37:01.725138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.616 [2024-11-25 10:37:01.725152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:07.616 [2024-11-25 10:37:01.725171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:07.616 [2024-11-25 10:37:01.725183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:07.616 [2024-11-25 10:37:01.725198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:07.616 [2024-11-25 10:37:01.725210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:07.616 [2024-11-25 10:37:01.725225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:07.616 [2024-11-25 10:37:01.725243] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:07.616 [2024-11-25 10:37:01.725262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:07.616 [2024-11-25 10:37:01.725295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:07.616 [2024-11-25 10:37:01.725308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:07.616 [2024-11-25 10:37:01.725324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:07.616 [2024-11-25 10:37:01.725337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:07.616 [2024-11-25 10:37:01.725352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:07.616 [2024-11-25 10:37:01.725365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:07.616 [2024-11-25 10:37:01.725380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:07.616 [2024-11-25 10:37:01.725393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:07.616 [2024-11-25 10:37:01.725411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:07.616 [2024-11-25 10:37:01.725482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:07.616 [2024-11-25 10:37:01.725499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:07.616 [2024-11-25 10:37:01.725528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:07.616 [2024-11-25 10:37:01.725541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:07.616 [2024-11-25 10:37:01.725557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:07.616 [2024-11-25 10:37:01.725571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.616 [2024-11-25 10:37:01.725587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:07.616 [2024-11-25 10:37:01.725601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:31:07.616 [2024-11-25 10:37:01.725616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.617 [2024-11-25 10:37:01.725677] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:07.617 [2024-11-25 10:37:01.725701] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:10.149 [2024-11-25 10:37:04.478060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.149 [2024-11-25 10:37:04.478139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:10.149 [2024-11-25 10:37:04.478163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2752.396 ms 00:31:10.149 [2024-11-25 10:37:04.478180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.518164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.518253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.407 [2024-11-25 10:37:04.518276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.701 ms 00:31:10.407 [2024-11-25 10:37:04.518294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.518518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.518557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:10.407 [2024-11-25 10:37:04.518574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:10.407 [2024-11-25 10:37:04.518593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.564739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.564825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.407 [2024-11-25 10:37:04.564848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.055 ms 00:31:10.407 [2024-11-25 10:37:04.564868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.564939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.564964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.407 [2024-11-25 10:37:04.564979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:10.407 [2024-11-25 10:37:04.564995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.565658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.565693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.407 [2024-11-25 10:37:04.565709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:31:10.407 [2024-11-25 10:37:04.565725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.565914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.565936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.407 [2024-11-25 10:37:04.565953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:31:10.407 [2024-11-25 10:37:04.565972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.587304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.587381] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.407 [2024-11-25 10:37:04.587404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.300 ms 00:31:10.407 [2024-11-25 10:37:04.587420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.407 [2024-11-25 10:37:04.604120] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:10.407 [2024-11-25 10:37:04.608497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.407 [2024-11-25 10:37:04.608545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:10.407 [2024-11-25 10:37:04.608570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.914 ms 00:31:10.408 [2024-11-25 10:37:04.608584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.408 [2024-11-25 10:37:04.697888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.408 [2024-11-25 10:37:04.697973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:10.408 [2024-11-25 10:37:04.697999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.220 ms 00:31:10.408 [2024-11-25 10:37:04.698014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.408 [2024-11-25 10:37:04.698321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.408 [2024-11-25 10:37:04.698347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:10.408 [2024-11-25 10:37:04.698369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:31:10.408 [2024-11-25 10:37:04.698383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.408 [2024-11-25 10:37:04.732076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.408 [2024-11-25 10:37:04.732158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:10.408 [2024-11-25 10:37:04.732185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.556 ms 00:31:10.408 [2024-11-25 10:37:04.732200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.666 [2024-11-25 10:37:04.764846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.666 [2024-11-25 10:37:04.764920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:10.666 [2024-11-25 10:37:04.764946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.517 ms 00:31:10.666 [2024-11-25 10:37:04.764960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.666 [2024-11-25 10:37:04.765920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.666 [2024-11-25 10:37:04.765949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:10.666 [2024-11-25 10:37:04.765968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:31:10.666 [2024-11-25 10:37:04.765982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.666 [2024-11-25 10:37:04.857920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.666 [2024-11-25 10:37:04.857995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:10.666 [2024-11-25 10:37:04.858024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.830 ms 00:31:10.666 [2024-11-25 10:37:04.858038] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.666 [2024-11-25 10:37:04.890734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.666 [2024-11-25 10:37:04.890816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:10.666 [2024-11-25 10:37:04.890842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.548 ms 00:31:10.666 [2024-11-25 10:37:04.890856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.666 [2024-11-25 10:37:04.922031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.667 [2024-11-25 10:37:04.922075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:10.667 [2024-11-25 10:37:04.922098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.118 ms 00:31:10.667 [2024-11-25 10:37:04.922111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.667 [2024-11-25 10:37:04.953498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.667 [2024-11-25 10:37:04.953557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:10.667 [2024-11-25 10:37:04.953581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.330 ms 00:31:10.667 [2024-11-25 10:37:04.953596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.667 [2024-11-25 10:37:04.953657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.667 [2024-11-25 10:37:04.953677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:10.667 [2024-11-25 10:37:04.953698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:10.667 [2024-11-25 10:37:04.953711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.667 [2024-11-25 10:37:04.953860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.667 [2024-11-25 10:37:04.953881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:10.667 [2024-11-25 10:37:04.953903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:10.667 [2024-11-25 10:37:04.953916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.667 [2024-11-25 10:37:04.955267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3250.099 ms, result 0 00:31:10.667 { 00:31:10.667 "name": "ftl0", 00:31:10.667 "uuid": "a2e69933-1e21-473c-805a-0a6646f66532" 00:31:10.667 } 00:31:10.667 10:37:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:10.667 10:37:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:11.235 10:37:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:11.235 10:37:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:11.235 10:37:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:11.495 /dev/nbd0 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:11.495 1+0 records in 00:31:11.495 1+0 records out 00:31:11.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332307 s, 12.3 MB/s 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:31:11.495 10:37:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:11.495 [2024-11-25 10:37:05.766607] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:31:11.495 [2024-11-25 10:37:05.766805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81464 ] 00:31:11.754 [2024-11-25 10:37:05.950094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.015 [2024-11-25 10:37:06.087231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.390  [2024-11-25T10:37:08.659Z] Copying: 167/1024 [MB] (167 MBps) [2024-11-25T10:37:09.598Z] Copying: 329/1024 [MB] (162 MBps) [2024-11-25T10:37:10.535Z] Copying: 493/1024 [MB] (163 MBps) [2024-11-25T10:37:11.470Z] Copying: 659/1024 [MB] (166 MBps) [2024-11-25T10:37:12.846Z] Copying: 817/1024 [MB] (158 MBps) [2024-11-25T10:37:12.846Z] Copying: 970/1024 [MB] (152 MBps) [2024-11-25T10:37:14.219Z] Copying: 1024/1024 [MB] (average 161 MBps) 00:31:19.886 00:31:19.886 10:37:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:21.789 10:37:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:31:21.789 [2024-11-25 10:37:16.077312] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
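For context, the data-integrity phase running here, driven by ftl/dirty_shutdown.sh@75-77 above, reduces to three operations: fill a 1 GiB test file (262144 blocks x 4096 bytes = 2^30 bytes) with random data, record its md5 as the reference checksum, and replay the file onto the FTL bdev through its nbd export. A minimal equivalent sketch, substituting coreutils dd for spdk_dd and assuming the paths from this log, with /dev/nbd0 already exported by nbd_start_disk as shown earlier:

# Fill the test file with 1 GiB of random data (262144 x 4096 B = 2^30 B).
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4096 count=262144
# Record the reference checksum of the data about to be written to ftl0.
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
# Replay the file onto the FTL device via its nbd export; oflag=direct bypasses
# the page cache so every 4 KiB block goes through the FTL write path.
dd if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct

The throughput gap between the two copies in this log (average 161 MBps to the plain file above vs. roughly 14 MBps through nbd below) is consistent with the second copy traversing the nbd kernel module and the FTL write buffer rather than the filesystem alone.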
00:31:21.789 [2024-11-25 10:37:16.077471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81569 ] 00:31:22.048 [2024-11-25 10:37:16.256089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.308 [2024-11-25 10:37:16.416829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.682  [2024-11-25T10:37:18.950Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-25T10:37:19.904Z] Copying: 31/1024 [MB] (16 MBps) [2024-11-25T10:37:20.839Z] Copying: 47/1024 [MB] (15 MBps) [2024-11-25T10:37:21.774Z] Copying: 62/1024 [MB] (15 MBps) [2024-11-25T10:37:23.149Z] Copying: 77/1024 [MB] (15 MBps) [2024-11-25T10:37:24.084Z] Copying: 92/1024 [MB] (14 MBps) [2024-11-25T10:37:25.017Z] Copying: 106/1024 [MB] (14 MBps) [2024-11-25T10:37:25.952Z] Copying: 121/1024 [MB] (14 MBps) [2024-11-25T10:37:26.886Z] Copying: 134/1024 [MB] (13 MBps) [2024-11-25T10:37:27.820Z] Copying: 149/1024 [MB] (14 MBps) [2024-11-25T10:37:28.755Z] Copying: 165/1024 [MB] (15 MBps) [2024-11-25T10:37:30.132Z] Copying: 180/1024 [MB] (14 MBps) [2024-11-25T10:37:31.068Z] Copying: 195/1024 [MB] (14 MBps) [2024-11-25T10:37:32.033Z] Copying: 210/1024 [MB] (15 MBps) [2024-11-25T10:37:32.970Z] Copying: 225/1024 [MB] (14 MBps) [2024-11-25T10:37:33.906Z] Copying: 239/1024 [MB] (14 MBps) [2024-11-25T10:37:34.841Z] Copying: 254/1024 [MB] (14 MBps) [2024-11-25T10:37:35.778Z] Copying: 269/1024 [MB] (15 MBps) [2024-11-25T10:37:37.154Z] Copying: 284/1024 [MB] (15 MBps) [2024-11-25T10:37:38.091Z] Copying: 299/1024 [MB] (15 MBps) [2024-11-25T10:37:39.028Z] Copying: 314/1024 [MB] (14 MBps) [2024-11-25T10:37:39.963Z] Copying: 329/1024 [MB] (15 MBps) [2024-11-25T10:37:40.900Z] Copying: 344/1024 [MB] (14 MBps) [2024-11-25T10:37:41.835Z] Copying: 358/1024 [MB] (13 MBps) [2024-11-25T10:37:42.770Z] Copying: 372/1024 [MB] (14 MBps) [2024-11-25T10:37:43.765Z] Copying: 387/1024 [MB] (14 MBps) [2024-11-25T10:37:45.141Z] Copying: 401/1024 [MB] (14 MBps) [2024-11-25T10:37:46.076Z] Copying: 416/1024 [MB] (14 MBps) [2024-11-25T10:37:47.013Z] Copying: 431/1024 [MB] (15 MBps) [2024-11-25T10:37:47.948Z] Copying: 446/1024 [MB] (14 MBps) [2024-11-25T10:37:48.884Z] Copying: 460/1024 [MB] (14 MBps) [2024-11-25T10:37:49.819Z] Copying: 475/1024 [MB] (14 MBps) [2024-11-25T10:37:50.755Z] Copying: 490/1024 [MB] (14 MBps) [2024-11-25T10:37:52.129Z] Copying: 505/1024 [MB] (15 MBps) [2024-11-25T10:37:53.074Z] Copying: 520/1024 [MB] (14 MBps) [2024-11-25T10:37:54.024Z] Copying: 534/1024 [MB] (13 MBps) [2024-11-25T10:37:54.957Z] Copying: 548/1024 [MB] (14 MBps) [2024-11-25T10:37:55.893Z] Copying: 563/1024 [MB] (14 MBps) [2024-11-25T10:37:56.828Z] Copying: 577/1024 [MB] (14 MBps) [2024-11-25T10:37:57.764Z] Copying: 591/1024 [MB] (13 MBps) [2024-11-25T10:37:59.140Z] Copying: 604/1024 [MB] (13 MBps) [2024-11-25T10:38:00.088Z] Copying: 619/1024 [MB] (14 MBps) [2024-11-25T10:38:01.027Z] Copying: 633/1024 [MB] (14 MBps) [2024-11-25T10:38:01.961Z] Copying: 647/1024 [MB] (14 MBps) [2024-11-25T10:38:02.898Z] Copying: 662/1024 [MB] (14 MBps) [2024-11-25T10:38:03.834Z] Copying: 676/1024 [MB] (14 MBps) [2024-11-25T10:38:04.770Z] Copying: 691/1024 [MB] (14 MBps) [2024-11-25T10:38:06.148Z] Copying: 705/1024 [MB] (14 MBps) [2024-11-25T10:38:07.085Z] Copying: 720/1024 [MB] (14 MBps) [2024-11-25T10:38:08.020Z] Copying: 734/1024 [MB] (14 MBps) 
[2024-11-25T10:38:08.975Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-25T10:38:09.909Z] Copying: 762/1024 [MB] (13 MBps) [2024-11-25T10:38:10.846Z] Copying: 776/1024 [MB] (13 MBps) [2024-11-25T10:38:11.800Z] Copying: 790/1024 [MB] (14 MBps) [2024-11-25T10:38:13.176Z] Copying: 804/1024 [MB] (13 MBps) [2024-11-25T10:38:14.113Z] Copying: 818/1024 [MB] (13 MBps) [2024-11-25T10:38:15.047Z] Copying: 831/1024 [MB] (13 MBps) [2024-11-25T10:38:15.983Z] Copying: 845/1024 [MB] (13 MBps) [2024-11-25T10:38:16.926Z] Copying: 859/1024 [MB] (14 MBps) [2024-11-25T10:38:17.862Z] Copying: 873/1024 [MB] (13 MBps) [2024-11-25T10:38:18.798Z] Copying: 887/1024 [MB] (13 MBps) [2024-11-25T10:38:20.175Z] Copying: 901/1024 [MB] (13 MBps) [2024-11-25T10:38:21.110Z] Copying: 915/1024 [MB] (13 MBps) [2024-11-25T10:38:22.045Z] Copying: 928/1024 [MB] (13 MBps) [2024-11-25T10:38:22.980Z] Copying: 942/1024 [MB] (13 MBps) [2024-11-25T10:38:23.916Z] Copying: 955/1024 [MB] (13 MBps) [2024-11-25T10:38:24.849Z] Copying: 968/1024 [MB] (12 MBps) [2024-11-25T10:38:25.783Z] Copying: 981/1024 [MB] (13 MBps) [2024-11-25T10:38:27.159Z] Copying: 995/1024 [MB] (13 MBps) [2024-11-25T10:38:28.094Z] Copying: 1008/1024 [MB] (13 MBps) [2024-11-25T10:38:28.094Z] Copying: 1022/1024 [MB] (13 MBps) [2024-11-25T10:38:29.029Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:32:34.696 00:32:34.696 10:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:32:34.696 10:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:32:34.955 10:38:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:35.214 [2024-11-25 10:38:29.298671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.298733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:35.214 [2024-11-25 10:38:29.298755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:35.214 [2024-11-25 10:38:29.298815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.298867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:35.214 [2024-11-25 10:38:29.302414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.302449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:35.214 [2024-11-25 10:38:29.302483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.514 ms 00:32:35.214 [2024-11-25 10:38:29.302495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.304666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.304704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:35.214 [2024-11-25 10:38:29.304738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:32:35.214 [2024-11-25 10:38:29.304750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.321609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.321655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:35.214 [2024-11-25 10:38:29.321692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.830 ms 00:32:35.214 [2024-11-25 10:38:29.321705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.328299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.328334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:35.214 [2024-11-25 10:38:29.328367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.547 ms 00:32:35.214 [2024-11-25 10:38:29.328378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.357962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.358002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:35.214 [2024-11-25 10:38:29.358038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.495 ms 00:32:35.214 [2024-11-25 10:38:29.358050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.374645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.374686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:35.214 [2024-11-25 10:38:29.374722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.544 ms 00:32:35.214 [2024-11-25 10:38:29.374737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.375022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.375058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:35.214 [2024-11-25 10:38:29.375076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:32:35.214 [2024-11-25 10:38:29.375088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.400989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.401037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:35.214 [2024-11-25 10:38:29.401083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.872 ms 00:32:35.214 [2024-11-25 10:38:29.401094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.428061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.428101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:35.214 [2024-11-25 10:38:29.428135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.920 ms 00:32:35.214 [2024-11-25 10:38:29.428147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.455556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.455748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:35.214 [2024-11-25 10:38:29.455810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.326 ms 00:32:35.214 [2024-11-25 10:38:29.455826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.214 [2024-11-25 10:38:29.483303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.214 [2024-11-25 10:38:29.483487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:35.214 [2024-11-25 
10:38:29.483519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.343 ms
[2024-11-25 10:38:29.483531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-25 10:38:29.483581] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-25 10:38:29.483604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[2024-11-25 10:38:29.483620 .. 10:38:29.485126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-25 10:38:29.485146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-25 10:38:29.485160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a2e69933-1e21-473c-805a-0a6646f66532
[2024-11-25 10:38:29.485171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-25 10:38:29.485186] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-25 10:38:29.485196] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-25 10:38:29.485214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-25 10:38:29.485224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-25 10:38:29.485241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
[2024-11-25 10:38:29.485266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
[2024-11-25 10:38:29.485294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
[2024-11-25 10:38:29.485319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
[2024-11-25 10:38:29.485333] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.755 ms, status: 0)
[2024-11-25 10:38:29.500638] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 15.164 ms, status: 0)
[2024-11-25 10:38:29.501231] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.466 ms, status: 0)
[2024-11-25 10:38:29.554624] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.554805] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.554975] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.555087] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.656749] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.738615] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.738970] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.739196] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.739426] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.739549] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.739650] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.739765] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
[2024-11-25 10:38:29.740125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 441.389 ms, result 0
true
10:38:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81315
10:38:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81315
10:38:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
[2024-11-25 10:38:29.882993] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
[2024-11-25 10:38:29.883186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82290 ]
[2024-11-25 10:38:30.068324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-25 10:38:30.183587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-25T10:38:32.633Z] Copying: 187/1024 [MB] (187 MBps)
[2024-11-25T10:38:33.597Z] Copying: 369/1024 [MB] (181 MBps)
[2024-11-25T10:38:34.529Z] Copying: 552/1024 [MB] (183 MBps)
[2024-11-25T10:38:35.902Z] Copying: 735/1024 [MB] (183 MBps)
[2024-11-25T10:38:36.160Z] Copying: 911/1024 [MB] (175 MBps)
[2024-11-25T10:38:37.096Z] Copying: 1024/1024 [MB] (average 181 MBps)
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81315 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
10:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-25 10:38:37.181545] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
[2024-11-25 10:38:37.181733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82362 ]
[2024-11-25 10:38:37.372078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-25 10:38:37.497283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-25 10:38:37.843898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-25 10:38:37.844001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-25 10:38:37.913259] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
[2024-11-25 10:38:37.913680] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
[2024-11-25 10:38:37.913990] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
[2024-11-25 10:38:38.197551] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.005 ms, status: 0)
[2024-11-25 10:38:38.197731] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.040 ms, status: 0)
[2024-11-25 10:38:38.197843] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-25 10:38:38.198865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-25 10:38:38.199041] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 1.204 ms, status: 0)
[2024-11-25 10:38:38.201093] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-25 10:38:38.216465] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 15.374 ms, status: 0)
[2024-11-25 10:38:38.216615] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.024 ms, status: 0)
[2024-11-25 10:38:38.225866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 9.113 ms, status: 0)
[2024-11-25 10:38:38.226039] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.061 ms, status: 0)
[2024-11-25 10:38:38.226128] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.011 ms, status: 0)
[2024-11-25 10:38:38.226201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-25 10:38:38.230852] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 4.659 ms, status: 0)
[2024-11-25 10:38:38.230972] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.011 ms, status: 0)
[2024-11-25 10:38:38.231115] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-25 10:38:38.231151] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-25 10:38:38.231191] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-25 10:38:38.231210] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-25 10:38:38.231322] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-25 10:38:38.231342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-25 10:38:38.231356] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-25 10:38:38.231370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-25 10:38:38.231387] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-25 10:38:38.231399] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-25 10:38:38.231410] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-25 10:38:38.231421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-25 10:38:38.231431] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-25 10:38:38.231442] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.353 ms, status: 0)
[2024-11-25 10:38:38.231567] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.064 ms, status: 0)
[2024-11-25 10:38:38.231740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-25 10:38:38.231782] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset: 0.00 MiB, blocks: 0.12 MiB
[2024-11-25 10:38:38.231815] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset: 0.12 MiB, blocks: 80.00 MiB
[2024-11-25 10:38:38.231847] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset: 80.12 MiB, blocks: 0.50 MiB
[2024-11-25 10:38:38.231921] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset: 80.62 MiB, blocks: 0.50 MiB
[2024-11-25 10:38:38.231964] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset: 113.88 MiB, blocks: 0.12 MiB
[2024-11-25 10:38:38.231993] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset: 114.00 MiB, blocks: 0.12 MiB
[2024-11-25 10:38:38.232022] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset: 81.12 MiB, blocks: 8.00 MiB
[2024-11-25 10:38:38.232049] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset: 89.12 MiB, blocks: 8.00 MiB
[2024-11-25 10:38:38.232080] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset: 97.12 MiB, blocks: 8.00 MiB
[2024-11-25 10:38:38.232124] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset: 105.12 MiB, blocks: 8.00 MiB
[2024-11-25 10:38:38.232161] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset: 113.12 MiB, blocks: 0.25 MiB
[2024-11-25 10:38:38.232190] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset: 113.38 MiB, blocks: 0.25 MiB
[2024-11-25 10:38:38.232220] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset: 113.62 MiB, blocks: 0.12 MiB
[2024-11-25 10:38:38.232250] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset: 113.75 MiB, blocks: 0.12 MiB
[2024-11-25 10:38:38.232294] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-25 10:38:38.232305] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset: 0.00 MiB, blocks: 0.12 MiB
[2024-11-25 10:38:38.232340] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset: 102400.25 MiB, blocks: 3.38 MiB
[2024-11-25 10:38:38.232368] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset: 0.25 MiB, blocks: 102400.00 MiB
[2024-11-25 10:38:38.232408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-25 10:38:38.232429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-25 10:38:38.232441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
[2024-11-25 10:38:38.232451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
[2024-11-25 10:38:38.232461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
[2024-11-25 10:38:38.232471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
[2024-11-25 10:38:38.232481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
[2024-11-25 10:38:38.232491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
[2024-11-25 10:38:38.232501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
[2024-11-25 10:38:38.232511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
[2024-11-25 10:38:38.232521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
[2024-11-25 10:38:38.232532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
[2024-11-25 10:38:38.232542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
[2024-11-25 10:38:38.232551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
[2024-11-25 10:38:38.232562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
[2024-11-25 10:38:38.232572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-25 10:38:38.232585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-25 10:38:38.232597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-25 10:38:38.232609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-25 10:38:38.232619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-25 10:38:38.232629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-25 10:38:38.232647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-25 10:38:38.232659] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.980 ms, status: 0)
[2024-11-25 10:38:38.272214] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 39.415 ms, status: 0)
[2024-11-25 10:38:38.272525] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.080 ms, status: 0)
[2024-11-25 10:38:38.324725] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 52.040 ms, status: 0)
[2024-11-25 10:38:38.324956] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.004 ms, status: 0)
[2024-11-25 10:38:38.325751] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.652 ms, status: 0)
[2024-11-25 10:38:38.326014] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.149 ms, status: 0)
[2024-11-25 10:38:38.344364] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 18.264 ms, status: 0)
[2024-11-25 10:38:38.360426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-25 10:38:38.360621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-25 10:38:38.360645] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 15.993 ms, status: 0)
[2024-11-25 10:38:38.387303] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 26.559 ms, status: 0)
[2024-11-25 10:38:38.403304] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 15.314 ms, status: 0)
[2024-11-25 10:38:38.418385] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 14.437 ms, status: 0)
[2024-11-25 10:38:38.420108] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.902 ms, status: 0)
[2024-11-25 10:38:38.496887] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 76.295 ms, status: 0)
[2024-11-25 10:38:38.511251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-25 10:38:38.516413] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 18.886 ms, status: 0)
[2024-11-25 10:38:38.517457] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.013 ms, status: 0)
[2024-11-25 10:38:38.518085] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.078 ms, status: 0)
[2024-11-25 10:38:38.518281] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.009 ms, status: 0)
[2024-11-25 10:38:38.518430] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-25 10:38:38.518468] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.029 ms, status: 0)
[2024-11-25 10:38:38.556431] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 37.804 ms, status: 0)
[2024-11-25 10:38:38.557221] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.079 ms, status: 0)
[2024-11-25 10:38:38.559103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.921 ms, result 0
[2024-11-25T10:38:40.986Z] Copying: 24/1024 [MB] (24 MBps)
[2024-11-25T10:38:41.922Z] Copying: 48/1024 [MB] (23 MBps)
[2024-11-25T10:38:42.859Z .. 10:39:21.883Z] Copying: 70/1024 .. 998/1024 [MB] (22-25 MBps)
[2024-11-25T10:39:22.818Z] Copying: 1021/1024 [MB] (23 MBps)
[2024-11-25T10:39:22.818Z] Copying: 1048392/1048576 [kB] (2284 kBps)
[2024-11-25T10:39:22.818Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-11-25 10:39:22.811030] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.005 ms, status: 0)
[2024-11-25 10:39:22.814716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-25 10:39:22.820294] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 5.293 ms, status: 0)
[2024-11-25 10:39:22.832753] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 10.150 ms, status: 0)
[2024-11-25 10:39:22.854629] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 21.762 ms, status: 0)
[2024-11-25 10:39:22.860185] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 5.429 ms, status: 0)
[2024-11-25 10:39:22.886716] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 26.379 ms, status: 0)
[2024-11-25 10:39:22.902290] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 15.381 ms, status: 0)
[2024-11-25 10:39:23.021571] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 119.159 ms, status: 0)
[2024-11-25 10:39:23.047874] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 26.144 ms, status: 0)
[2024-11-25 10:39:23.075601] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 27.596 ms, status: 0)
[2024-11-25 10:39:23.101756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 25.770 ms, status: 0)
[2024-11-25 10:39:23.126878] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 24.930 ms, status: 0)
[2024-11-25 10:39:23.127181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-25 10:39:23.127202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126720 / 261120 wr_cnt: 1 state: open
[2024-11-25 10:39:23.127221 .. 10:39:23.128192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-86: 0 / 261120 wr_cnt: 0 state: free
[2024-11-25 10:39:23.128217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87:
0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:29.005 [2024-11-25 10:39:23.128427] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:29.005 [2024-11-25 10:39:23.128438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a2e69933-1e21-473c-805a-0a6646f66532 00:33:29.005 [2024-11-25 10:39:23.128453] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126720 00:33:29.005 [2024-11-25 10:39:23.128463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127680 00:33:29.005 [2024-11-25 10:39:23.128484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126720 00:33:29.005 [2024-11-25 10:39:23.128495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:33:29.005 [2024-11-25 10:39:23.128506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:29.005 [2024-11-25 10:39:23.128516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:29.005 [2024-11-25 10:39:23.128526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:29.005 [2024-11-25 10:39:23.128536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:29.005 [2024-11-25 10:39:23.128545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:29.005 [2024-11-25 10:39:23.128555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:29.005 [2024-11-25 10:39:23.128566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:29.005 [2024-11-25 10:39:23.128576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:33:29.005 [2024-11-25 10:39:23.128586] 
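The statistics dump above contains enough to sanity-check the reported write amplification factor: WAF here is total media writes divided by user-issued writes, and 127680 / 126720 rounds to the 1.0076 in the log, the extra 960 writes being FTL metadata overhead. A minimal C sketch of that arithmetic (not SPDK code, just the two counters from the dump):

    #include <stdio.h>

    int main(void)
    {
        /* counters taken verbatim from the ftl0 statistics dump */
        unsigned long total_writes = 127680;
        unsigned long user_writes = 126720;

        /* WAF = total media writes / user-issued writes */
        printf("WAF: %.4f\n", (double)total_writes / (double)user_writes);
        return 0;
    }

This prints "WAF: 1.0076", matching the dump line above.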
[2024-11-25 10:39:23.143482] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinitialize L2P: duration 14.858 ms, status 0
[2024-11-25 10:39:23.144053] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinitialize P2L checkpointing: duration 0.472 ms, status 0
[2024-11-25 10:39:23.182237] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each with duration 0.000 ms, status 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-25 10:39:23.353328] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 545.320 ms, result 0
10:39:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
10:39:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-25 10:39:26.681755] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
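The spdk_dd invocation above reads 262144 blocks from the ftl0 bdev. Assuming a 4 KiB logical block size (an assumption, but the only one consistent with the progress totals that follow), that works out to exactly 1 GiB, i.e. the 1048576 [kB] / 1024 [MB] totals reported by the copy loop below. A quick C sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long count = 262144;    /* --count from the command line */
        unsigned long long block_size = 4096; /* assumed FTL logical block size */
        unsigned long long bytes = count * block_size;

        /* prints "1048576 kB, 1024 MB", matching the Copying progress totals */
        printf("%llu kB, %llu MB\n", bytes / 1024, bytes / (1024 * 1024));
        return 0;
    }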
[2024-11-25 10:39:26.682221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82844 ]
[2024-11-25 10:39:26.868652] app.c: spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-25 10:39:26.980916] reactor.c: reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-25 10:39:27.302884] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-25 10:39:27.303346] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-25 10:39:27.466525] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Check configuration: duration 0.004 ms, status 0
[2024-11-25 10:39:27.467190] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Open base bdev: duration 0.040 ms, status 0
[2024-11-25 10:39:27.467266] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-25 10:39:27.468162] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-25 10:39:27.468203] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Open cache bdev: duration 0.943 ms, status 0
[2024-11-25 10:39:27.470399] mngt/ftl_mngt_md.c: ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-25 10:39:27.485366] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Load super block: duration 14.970 ms, status 0
[2024-11-25 10:39:27.485518] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Validate super block: duration 0.023 ms, status 0
[2024-11-25 10:39:27.494958] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize memory pools: duration 9.324 ms, status 0
[2024-11-25 10:39:27.495346] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize bands: duration 0.072 ms, status 0
[2024-11-25 10:39:27.495455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Register IO device: duration 0.020 ms, status 0
[2024-11-25 10:39:27.495545] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-25 10:39:27.500220] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize core IO channel: duration 4.683 ms, status 0
[2024-11-25 10:39:27.500385] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Decorate bands: duration 0.010 ms, status 0
[2024-11-25 10:39:27.500462] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-25 10:39:27.500491] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-11-25 10:39:27.500652] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-11-25 10:39:27.500690] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-25 10:39:27.500702] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-25 10:39:27.500712] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-25 10:39:27.500722] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-25 10:39:27.500732] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-25 10:39:27.500752] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-25 10:39:27.500767] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize layout: duration 0.308 ms, status 0
[2024-11-25 10:39:27.500914] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Verify layout: duration 0.058 ms, status 0
[2024-11-25 10:39:27.501056] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region            offset (MiB)   blocks (MiB)
    sb                0.00           0.12
    l2p               0.12           80.00
    band_md           80.12          0.50
    band_md_mirror    80.62          0.50
    nvc_md            113.88         0.12
    nvc_md_mirror     114.00         0.12
    p2l0              81.12          8.00
    p2l1              89.12          8.00
    p2l2              97.12          8.00
    p2l3              105.12         8.00
    trim_md           113.12         0.25
    trim_md_mirror    113.38         0.25
    trim_log          113.62         0.12
    trim_log_mirror   113.75         0.12
[2024-11-25 10:39:27.501572] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region            offset (MiB)   blocks (MiB)
    sb_mirror         0.00           0.12
    vmap              102400.25      3.38
    data_btm          0.25           102400.00
[2024-11-25 10:39:27.501681] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0        ver:5  blk_offs:0x0     blk_sz:0x20
    Region type:0x2        ver:0  blk_offs:0x20    blk_sz:0x5000
    Region type:0x3        ver:2  blk_offs:0x5020  blk_sz:0x80
    Region type:0x4        ver:2  blk_offs:0x50a0  blk_sz:0x80
    Region type:0xa        ver:2  blk_offs:0x5120  blk_sz:0x800
    Region type:0xb        ver:2  blk_offs:0x5920  blk_sz:0x800
    Region type:0xc        ver:2  blk_offs:0x6120  blk_sz:0x800
    Region type:0xd        ver:2  blk_offs:0x6920  blk_sz:0x800
    Region type:0xe        ver:0  blk_offs:0x7120  blk_sz:0x40
    Region type:0xf        ver:0  blk_offs:0x7160  blk_sz:0x40
    Region type:0x10       ver:1  blk_offs:0x71a0  blk_sz:0x20
    Region type:0x11       ver:1  blk_offs:0x71c0  blk_sz:0x20
    Region type:0x6        ver:2  blk_offs:0x71e0  blk_sz:0x20
    Region type:0x7        ver:2  blk_offs:0x7200  blk_sz:0x20
    Region type:0xfffffffe ver:0  blk_offs:0x7220  blk_sz:0x13c0e0
[2024-11-25 10:39:27.501899] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1        ver:5  blk_offs:0x0       blk_sz:0x20
    Region type:0xfffffffe ver:0  blk_offs:0x20      blk_sz:0x20
    Region type:0x9        ver:0  blk_offs:0x40      blk_sz:0x1900000
    Region type:0x5        ver:0  blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0  blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-25 10:39:27.501980] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Layout upgrade: duration 0.981 ms, status 0
[2024-11-25 10:39:27.541979] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize metadata: duration 39.898 ms, status 0
[2024-11-25 10:39:27.542254] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize band addresses: duration 0.101 ms, status 0
[2024-11-25 10:39:27.596903] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize NV cache: duration 54.524 ms, status 0
[2024-11-25 10:39:27.597057] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize valid map: duration 0.003 ms, status 0
[2024-11-25 10:39:27.597778] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize trim map: duration 0.588 ms, status 0
[2024-11-25 10:39:27.598094] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize bands metadata: duration 0.193 ms, status 0
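The layout numbers above are internally consistent: 20971520 L2P entries at the reported 4-byte address size come to exactly 80 MiB, which is the size shown for the l2p region in the NV cache layout table. A small C sketch of that cross-check (plain arithmetic on values from the log, not an SPDK API):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long entries = 20971520; /* L2P entries from the log */
        unsigned long long addr_size = 4;      /* L2P address size from the log */

        /* prints "l2p region: 80.00 MiB", matching the layout table */
        printf("l2p region: %.2f MiB\n",
               (double)(entries * addr_size) / (1024.0 * 1024.0));
        return 0;
    }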
[2024-11-25 10:39:27.615251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize reloc: duration 17.058 ms, status 0
[2024-11-25 10:39:27.629742] ftl_nv_cache.c: ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-11-25 10:39:27.629811] ftl_nv_cache.c: ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-25 10:39:27.629846] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore NV cache metadata: duration 14.394 ms, status 0
[2024-11-25 10:39:27.655146] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore valid map metadata: duration 25.199 ms, status 0
[2024-11-25 10:39:27.668822] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore band info metadata: duration 13.369 ms, status 0
[2024-11-25 10:39:27.681537] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore trim metadata: duration 12.590 ms, status 0
[2024-11-25 10:39:27.682413] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize P2L checkpointing: duration 0.697 ms, status 0
[2024-11-25 10:39:27.751172] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore P2L checkpoints: duration 68.670 ms, status 0
[2024-11-25 10:39:27.762355] ftl_l2p_cache.c: ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-25 10:39:27.765037] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Initialize L2P: duration 13.411 ms, status 0
[2024-11-25 10:39:27.765208] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Restore L2P: duration 0.008 ms, status 0
[2024-11-25 10:39:27.767439] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finalize band initialization: duration 2.134 ms, status 0
[2024-11-25 10:39:27.767954] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Start core poller: duration 0.008 ms, status 0
[2024-11-25 10:39:27.768224] mngt/ftl_mngt_self_test.c: ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-25 10:39:27.768411] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Self test on startup: duration 0.187 ms, status 0
[2024-11-25 10:39:27.794813] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Set FTL dirty state: duration 26.321 ms, status 0
[2024-11-25 10:39:27.795453] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finalize initialization: duration 0.052 ms, status 0
[2024-11-25 10:39:27.798946] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.014 ms, result 0
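Every management step above is reported through the same trace_step pattern: a marker when the step's Action fires, then its name, elapsed time, and status on completion, with the finish_msg line giving the whole pipeline's total (331.014 ms here). A minimal C sketch of that timing pattern, assuming nothing about SPDK internals beyond what the log lines themselves show:

    #include <stdio.h>
    #include <time.h>

    /* elapsed milliseconds between two monotonic timestamps */
    static double elapsed_ms(struct timespec start, struct timespec end)
    {
        return (end.tv_sec - start.tv_sec) * 1e3 +
               (end.tv_nsec - start.tv_nsec) / 1e6;
    }

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... step body would run here, e.g. "Set FTL dirty state" ... */
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("name: %s\nduration: %.3f ms\nstatus: %d\n",
               "Set FTL dirty state", elapsed_ms(start, end), 0);
        return 0;
    }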
[2024-11-25T10:39:30.241Z] Copying: 912/1048576 [kB] (912 kBps)
[2024-11-25T10:39:31.177Z] Copying: 4972/1048576 [kB] (4060 kBps)
[2024-11-25T10:39:32Z to 10:40:07Z] Copying: 27/1024 through 1012/1024 [MB], 37 per-second samples at 22-28 MBps
[2024-11-25T10:40:07.762Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-11-25 10:40:07.615222] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Deinit core IO channel: duration 0.004 ms, status 0
[2024-11-25 10:40:07.615443] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-25 10:40:07.621649] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Unregister IO device: duration 6.178 ms, status 0
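The final Copying line's average agrees with the wall clock: roughly 1024 MB between the FTL startup finish at 10:39:27.8 and the last progress stamp at 10:40:07.8, about 40 seconds, gives ~25 MBps as reported (the exact figure depends on where dd starts its clock). The arithmetic as a short C check:

    #include <stdio.h>

    int main(void)
    {
        double mb = 1024.0;    /* total copied, from the log */
        double seconds = 40.0; /* approx. 10:39:27.8 to 10:40:07.8 */

        /* prints "average: ~25.6 MBps", consistent with the reported 25 */
        printf("average: ~%.1f MBps\n", mb / seconds);
        return 0;
    }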
[2024-11-25 10:40:07.622534] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Stop core poller: duration 0.244 ms, status 0
[2024-11-25 10:40:07.634288] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist L2P: duration 11.191 ms, status 0
[2024-11-25 10:40:07.640781] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Finish L2P trims: duration 5.858 ms, status 0
[2024-11-25 10:40:07.669541] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist NV cache metadata: duration 28.120 ms, status 0
[2024-11-25 10:40:07.686136] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist valid map metadata: duration 15.989 ms, status 0
[2024-11-25 10:40:07.688443] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist P2L metadata: duration 1.727 ms, status 0
[2024-11-25 10:40:07.715563] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist band info metadata: duration 26.563 ms, status 0
[2024-11-25 10:40:07.741895] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist trim metadata: duration 25.848 ms, status 0
[2024-11-25 10:40:07.768808] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Persist superblock: duration 26.781 ms, status 0
[2024-11-25 10:40:07.794233] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Set FTL clean state: duration 25.274 ms, status 0
[2024-11-25 10:40:07.794350] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[FTL][ftl0]   Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[FTL][ftl0]   Band 2: 1536 / 261120 wr_cnt: 1 state: open
[FTL][ftl0]   Bands 3-91: 0 / 261120 wr_cnt: 0 state: free
00:34:13.691 [2024-11-25 10:40:07.795549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:13.691 [2024-11-25 10:40:07.795676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:13.691 [2024-11-25 10:40:07.795687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a2e69933-1e21-473c-805a-0a6646f66532 00:34:13.691 [2024-11-25 10:40:07.795699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:34:13.691 [2024-11-25 10:40:07.795708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137920 00:34:13.691 [2024-11-25 10:40:07.795724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135936 00:34:13.691 [2024-11-25 10:40:07.795736] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146 00:34:13.691 [2024-11-25 10:40:07.795746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:13.691 [2024-11-25 10:40:07.795756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:13.691 [2024-11-25 10:40:07.795766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:13.691 [2024-11-25 10:40:07.795786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:13.691 [2024-11-25 10:40:07.795795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:13.691 [2024-11-25 10:40:07.795805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.692 [2024-11-25 10:40:07.795816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:13.692 [2024-11-25 10:40:07.795826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:34:13.692 [2024-11-25 10:40:07.795836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.692 [2024-11-25 10:40:07.810680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:13.692 [2024-11-25 10:40:07.810717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:13.692 [2024-11-25 10:40:07.810750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.821 ms 00:34:13.692 [2024-11-25 10:40:07.810761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:13.692 [2024-11-25 10:40:07.811323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
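A quick sanity check on the statistics dump above: the write amplification factor reported by ftl_dev_dump_stats is simply total media writes divided by user-issued writes. A minimal sketch in Python, with the counters copied from the log:

    # WAF = total media writes / user-issued writes, values taken from the
    # ftl_dev_dump_stats output above.
    total_writes = 137920
    user_writes = 135936
    print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0146, matching the logged value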
00:34:13.692 [2024-11-25 10:40:07.811323] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.510 ms, status: 0)
00:34:13.692 [2024-11-25 10:40:07.849729] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:34:13.692 [2024-11-25 10:40:07.850198] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:34:13.692 [2024-11-25 10:40:07.850540] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:34:13.692 [2024-11-25 10:40:07.850845] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:34:13.692 [2024-11-25 10:40:07.948383] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:34:13.951 [2024-11-25 10:40:08.024442] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.025136] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.025582] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.026031] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.026357] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.026721] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.027195] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:34:13.952 [2024-11-25 10:40:08.027711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.451 ms, result 0
00:34:14.889 10:40:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:16.792 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:34:16.792 10:40:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
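For context, dirty_shutdown.sh@94 verifies the data written through ftl0 before the dirty shutdown against a previously recorded checksum, and @95 reads the second 262144-block region (--skip=262144 --count=262144) back from the FTL device into testfile2. The checksum step itself is plain coreutils md5sum; a rough Python equivalent (a hypothetical stand-in, not part of the test suite) would be:

    import hashlib

    # Hypothetical equivalent of `md5sum -c testfile.md5`: hash the file written
    # through ftl0 and compare with the digest recorded before the dirty shutdown.
    def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
        h = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    expected = open("testfile.md5").read().split()[0]  # "<digest>  <path>" format
    print("OK" if md5_of("testfile") == expected else "FAILED")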
[2024-11-25 10:40:10.923336] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
00:34:16.792 [2024-11-25 10:40:10.923796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83267 ]
00:34:16.792 [2024-11-25 10:40:11.118231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:17.050 [2024-11-25 10:40:11.269039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:17.309 [2024-11-25 10:40:11.622170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:17.309 [2024-11-25 10:40:11.622269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:17.569 [2024-11-25 10:40:11.787901] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.005 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.788058] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.034 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.788129] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:34:17.569 [2024-11-25 10:40:11.788956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:34:17.569 [2024-11-25 10:40:11.788982] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.860 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.791407] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:34:17.569 [2024-11-25 10:40:11.808636] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 17.231 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.808905] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.024 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.819139] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 10.081 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.819490] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.073 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.819607] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.009 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.819692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:34:17.569 [2024-11-25 10:40:11.824328] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 4.644 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.824426] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.010 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.824508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:34:17.569 [2024-11-25 10:40:11.824541] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] blob load: nvc layout 0x150 bytes, base layout 0x48 bytes, layout 0x190 bytes
00:34:17.569 [2024-11-25 10:40:11.824712] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] blob store: nvc layout 0x150 bytes, base layout 0x48 bytes, layout 0x190 bytes
00:34:17.569 [2024-11-25 10:40:11.824749] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0]
  Base device capacity:     103424.00 MiB
  NV cache device capacity: 5171.00 MiB
  L2P entries:              20971520
  L2P address size:         4
  P2L checkpoint pages:     2048
  NV cache chunk count:     5
00:34:17.569 [2024-11-25 10:40:11.824883] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.378 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.825018] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.066 ms, status: 0)
00:34:17.569 [2024-11-25 10:40:11.825173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region            offset (MiB)  blocks (MiB)
  sb                        0.00          0.12
  l2p                       0.12         80.00
  band_md                  80.12          0.50
  band_md_mirror           80.62          0.50
  nvc_md                  113.88          0.12
  nvc_md_mirror           114.00          0.12
  p2l0                     81.12          8.00
  p2l1                     89.12          8.00
  p2l2                     97.12          8.00
  p2l3                    105.12          8.00
  trim_md                 113.12          0.25
  trim_md_mirror          113.38          0.25
  trim_log                113.62          0.12
  trim_log_mirror         113.75          0.12
00:34:17.570 [2024-11-25 10:40:11.825595] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region            offset (MiB)  blocks (MiB)
  sb_mirror                 0.00          0.12
  vmap                 102400.25          3.38
  data_btm                  0.25     102400.00
00:34:17.570 [2024-11-25 10:40:11.825696] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0        ver:5  blk_offs:0x0        blk_sz:0x20
  Region type:0x2        ver:0  blk_offs:0x20       blk_sz:0x5000
  Region type:0x3        ver:2  blk_offs:0x5020     blk_sz:0x80
  Region type:0x4        ver:2  blk_offs:0x50a0     blk_sz:0x80
  Region type:0xa        ver:2  blk_offs:0x5120     blk_sz:0x800
  Region type:0xb        ver:2  blk_offs:0x5920     blk_sz:0x800
  Region type:0xc        ver:2  blk_offs:0x6120     blk_sz:0x800
  Region type:0xd        ver:2  blk_offs:0x6920     blk_sz:0x800
  Region type:0xe        ver:0  blk_offs:0x7120     blk_sz:0x40
  Region type:0xf        ver:0  blk_offs:0x7160     blk_sz:0x40
  Region type:0x10       ver:1  blk_offs:0x71a0     blk_sz:0x20
  Region type:0x11       ver:1  blk_offs:0x71c0     blk_sz:0x20
  Region type:0x6        ver:2  blk_offs:0x71e0     blk_sz:0x20
  Region type:0x7        ver:2  blk_offs:0x7200     blk_sz:0x20
  Region type:0xfffffffe ver:0  blk_offs:0x7220     blk_sz:0x13c0e0
00:34:17.570 [2024-11-25 10:40:11.825871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1        ver:5  blk_offs:0x0        blk_sz:0x20
  Region type:0xfffffffe ver:0  blk_offs:0x20       blk_sz:0x20
  Region type:0x9        ver:0  blk_offs:0x40       blk_sz:0x1900000
  Region type:0x5        ver:0  blk_offs:0x1900040  blk_sz:0x360
  Region type:0xfffffffe ver:0  blk_offs:0x19003a0  blk_sz:0x3fc60
00:34:17.570 [2024-11-25 10:40:11.825937] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.833 ms, status: 0)
00:34:17.570 [2024-11-25 10:40:11.864791] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 38.767 ms, status: 0)
00:34:17.570 [2024-11-25 10:40:11.865460] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.086 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.916380] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 50.550 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.917038] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.005 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.918285] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.734 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.918986] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.166 ms, status: 0)
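The MiB figures in the layout dump and the hex blk_offs/blk_sz fields in the SB metadata layout describe the same regions, assuming the 4 KiB FTL block size these numbers imply (the type:0x2 entry's 0x5000 blocks line up exactly with the l2p region's 80.00 MiB). A small Python cross-check, with values copied from the log:

    # Convert superblock block offsets/sizes to MiB, assuming 4 KiB FTL blocks
    # (an assumption inferred from the dump, not stated in the log itself).
    FTL_BLOCK_SIZE = 4096  # bytes

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1 << 20)

    # type:0x2 entry from "SB metadata layout - nvc": blk_offs:0x20 blk_sz:0x5000
    print(blocks_to_mib(0x20))    # 0.125 -> "l2p offset: 0.12 MiB" in the NV cache layout
    print(blocks_to_mib(0x5000))  # 80.0  -> "l2p blocks: 80.00 MiB"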
00:34:17.839 [2024-11-25 10:40:11.937007] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 17.546 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.952156] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:34:17.839 [2024-11-25 10:40:11.952195] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:34:17.839 [2024-11-25 10:40:11.952221] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 14.446 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.976486] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 24.182 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:11.989311] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 12.702 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:12.002358] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 12.946 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:12.003299] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.775 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:12.077022] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 73.574 ms, status: 0)
00:34:17.839 [2024-11-25 10:40:12.088179] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:34:17.839 [2024-11-25 10:40:12.090825] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 13.607 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.091041] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.008 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.092151] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.997 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.092263] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.007 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.092344] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:34:17.840 [2024-11-25 10:40:12.092368] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.021 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.118433] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 25.980 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.118633] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.057 ms, status: 0)
00:34:17.840 [2024-11-25 10:40:12.120324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.783 ms, result 0
00:34:19.237 [2024-11-25T10:40:14.507Z] Copying: 22/1024 [MB] (22 MBps)
  [per-second progress continues at 21-24 MBps through 10:40:58]
[2024-11-25T10:40:58.369Z] Copying: 1024/1024 [MB] (average 22 MBps)
00:35:04.036 [2024-11-25 10:40:58.354154] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.005 ms, status: 0)
00:35:04.036 [2024-11-25 10:40:58.354352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
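As a rough consistency check on the transfer summary above: 1024 MB at the reported 22 MBps average works out to about 46.5 s, which matches the 10:40:12 to 10:40:58 window in the timestamps.

    # Cross-check the spdk_dd progress summary against the wall-clock window.
    total_mb, avg_mbps = 1024, 22
    print(total_mb / avg_mbps)  # ~46.5 s, consistent with the ~46 s between timestamps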
00:35:04.036 [2024-11-25 10:40:58.358794] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 4.411 ms, status: 0)
00:35:04.036 [2024-11-25 10:40:58.359201] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.282 ms, status: 0)
00:35:04.036 [2024-11-25 10:40:58.363024] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 3.731 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.369585] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 6.415 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.397860] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 28.095 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.413861] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 15.874 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.415837] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 1.863 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.443364] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 27.414 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.470416] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 26.743 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.496844] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 26.099 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.523460] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 26.261 ms, status: 0)
00:35:04.296 [2024-11-25 10:40:58.523769] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1:      261120 / 261120  wr_cnt: 1  state: closed
  Band 2:        1536 / 261120  wr_cnt: 1  state: open
  Bands 3-63:       0 / 261120  wr_cnt: 0  state: free
00:35:04.297 [2024-11-25 10:40:58.524582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64:
0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.524997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:04.297 [2024-11-25 10:40:58.525017] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:04.297 [2024-11-25 10:40:58.525033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a2e69933-1e21-473c-805a-0a6646f66532 00:35:04.297 [2024-11-25 10:40:58.525044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:35:04.297 [2024-11-25 10:40:58.525055] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:04.297 [2024-11-25 10:40:58.525066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:04.298 [2024-11-25 10:40:58.525077] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:04.298 [2024-11-25 10:40:58.525087] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:04.298 [2024-11-25 10:40:58.525098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:04.298 [2024-11-25 10:40:58.525120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:04.298 [2024-11-25 10:40:58.525130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:04.298 [2024-11-25 10:40:58.525140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:04.298 [2024-11-25 10:40:58.525151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.298 [2024-11-25 10:40:58.525161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:04.298 [2024-11-25 10:40:58.525173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:35:04.298 [2024-11-25 10:40:58.525184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.298 [2024-11-25 10:40:58.540588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.298 [2024-11-25 10:40:58.540625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:35:04.298 [2024-11-25 10:40:58.540656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.357 ms 00:35:04.298 [2024-11-25 10:40:58.540666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.298 [2024-11-25 10:40:58.541200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.298 [2024-11-25 10:40:58.541266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:04.298 [2024-11-25 10:40:58.541288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:35:04.298 [2024-11-25 10:40:58.541299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.298 [2024-11-25 10:40:58.580673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.298 [2024-11-25 10:40:58.580716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:04.298 [2024-11-25 10:40:58.580748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.298 [2024-11-25 10:40:58.580773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.298 [2024-11-25 10:40:58.580887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.298 [2024-11-25 10:40:58.580904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:04.298 [2024-11-25 10:40:58.580923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.298 [2024-11-25 10:40:58.580933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.298 [2024-11-25 10:40:58.581025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.298 [2024-11-25 10:40:58.581043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:04.298 [2024-11-25 10:40:58.581055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.298 [2024-11-25 10:40:58.581065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.298 [2024-11-25 10:40:58.581119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.298 [2024-11-25 10:40:58.581149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:04.298 [2024-11-25 10:40:58.581160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.298 [2024-11-25 10:40:58.581178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.689813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.689871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:04.557 [2024-11-25 10:40:58.689905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.689916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.768598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.768658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:04.557 [2024-11-25 10:40:58.768692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.768711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 
10:40:58.769065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:04.557 [2024-11-25 10:40:58.769079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.769090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.769163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:04.557 [2024-11-25 10:40:58.769176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.769187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.769370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:04.557 [2024-11-25 10:40:58.769383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.769395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.769469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:04.557 [2024-11-25 10:40:58.769481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.769492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.769604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:04.557 [2024-11-25 10:40:58.769615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.769626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.557 [2024-11-25 10:40:58.769692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:04.557 [2024-11-25 10:40:58.769704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.557 [2024-11-25 10:40:58.769714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.557 [2024-11-25 10:40:58.769876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 415.673 ms, result 0 00:35:05.491 00:35:05.491 00:35:05.491 10:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:07.393 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:35:07.393 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:35:07.393 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:35:07.393 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:07.393 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:07.393 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81315 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81315 ']' 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81315 00:35:07.652 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81315) - No such process 00:35:07.652 Process with pid 81315 is not found 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81315 is not found' 00:35:07.652 10:41:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:35:07.910 Remove shared memory files 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:35:07.910 ************************************ 00:35:07.910 END TEST ftl_dirty_shutdown 00:35:07.910 ************************************ 00:35:07.910 00:35:07.910 real 4m5.860s 00:35:07.910 user 4m47.745s 00:35:07.910 sys 0m40.018s 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.910 10:41:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:07.910 10:41:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:07.910 10:41:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:07.910 10:41:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.910 10:41:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:07.910 ************************************ 00:35:07.910 START TEST ftl_upgrade_shutdown 00:35:07.910 ************************************ 00:35:07.910 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:08.170 * Looking for test storage... 
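The run_test line above hands upgrade_shutdown.sh two positional arguments: the base data device (0000:00:11.0) and the NV cache device (0000:00:10.0). A minimal sketch of driving the same stage by hand, assuming a local SPDK checkout at a hypothetical $SPDK_REPO and root privileges; the script path and device addresses are taken verbatim from the log:

    # $SPDK_REPO and the use of sudo are assumptions, not taken from the log.
    cd "$SPDK_REPO"
    sudo ./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
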
00:35:08.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:08.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.170 --rc genhtml_branch_coverage=1 00:35:08.170 --rc genhtml_function_coverage=1 00:35:08.170 --rc genhtml_legend=1 00:35:08.170 --rc geninfo_all_blocks=1 00:35:08.170 --rc geninfo_unexecuted_blocks=1 00:35:08.170 00:35:08.170 ' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:08.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.170 --rc genhtml_branch_coverage=1 00:35:08.170 --rc genhtml_function_coverage=1 00:35:08.170 --rc genhtml_legend=1 00:35:08.170 --rc geninfo_all_blocks=1 00:35:08.170 --rc geninfo_unexecuted_blocks=1 00:35:08.170 00:35:08.170 ' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:08.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.170 --rc genhtml_branch_coverage=1 00:35:08.170 --rc genhtml_function_coverage=1 00:35:08.170 --rc genhtml_legend=1 00:35:08.170 --rc geninfo_all_blocks=1 00:35:08.170 --rc geninfo_unexecuted_blocks=1 00:35:08.170 00:35:08.170 ' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:08.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.170 --rc genhtml_branch_coverage=1 00:35:08.170 --rc genhtml_function_coverage=1 00:35:08.170 --rc genhtml_legend=1 00:35:08.170 --rc geninfo_all_blocks=1 00:35:08.170 --rc geninfo_unexecuted_blocks=1 00:35:08.170 00:35:08.170 ' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:08.170 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:35:08.171 10:41:02 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:08.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83834 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83834 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83834 ']' 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.171 10:41:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:08.430 [2024-11-25 10:41:02.530241] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
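Before launching the target, the script exports the FTL test parameters seen in the xtrace above. Collected in one place as a sketch, with the values verbatim from the trace; the unit comments are inferred from the bdev size math that appears later in this same log:

    # Values verbatim from the xtrace; comments are inferred, not from the log.
    export FTL_BDEV=ftl                  # name of the FTL bdev under test
    export FTL_BASE=0000:00:11.0         # base (data) device
    export FTL_BASE_SIZE=20480           # MiB; matches the 20480 MiB lvol created below
    export FTL_CACHE=0000:00:10.0        # NV cache device
    export FTL_CACHE_SIZE=5120           # MiB; matches the cachen1 split size below
    export FTL_L2P_DRAM_LIMIT=2          # forwarded to bdev_ftl_create --l2p_dram_limit
    # tcp_target_setup then starts the target pinned to core 0:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
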
00:35:08.430 [2024-11-25 10:41:02.530833] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83834 ] 00:35:08.430 [2024-11-25 10:41:02.719548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.688 [2024-11-25 10:41:02.870856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:35:09.626 10:41:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
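The attach call above, together with the lvstore, lvol, cache split, and FTL creation steps the trace goes on to show, amounts to the rpc.py sequence below. This is a sketch only: $RPC and the captured shell variables are shorthand for illustration, while every command name, flag, and size is copied from the log (the scripts capture the printed UUIDs the same way):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                     # $RPC is shorthand, not from the log
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
    lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)                     # prints the new lvstore UUID
    lvol=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")           # 20480 MiB thin-provisioned lvol
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
    $RPC bdev_split_create cachen1 -s 5120 1                            # carve cachen1p0, 5120 MiB
    $RPC -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2
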
-- # local nb 00:35:09.884 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:35:10.143 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:10.143 { 00:35:10.143 "name": "basen1", 00:35:10.143 "aliases": [ 00:35:10.143 "9a81295b-4c66-4f08-a08e-3ae162c06d8b" 00:35:10.143 ], 00:35:10.143 "product_name": "NVMe disk", 00:35:10.143 "block_size": 4096, 00:35:10.143 "num_blocks": 1310720, 00:35:10.143 "uuid": "9a81295b-4c66-4f08-a08e-3ae162c06d8b", 00:35:10.143 "numa_id": -1, 00:35:10.143 "assigned_rate_limits": { 00:35:10.143 "rw_ios_per_sec": 0, 00:35:10.143 "rw_mbytes_per_sec": 0, 00:35:10.143 "r_mbytes_per_sec": 0, 00:35:10.143 "w_mbytes_per_sec": 0 00:35:10.143 }, 00:35:10.143 "claimed": true, 00:35:10.143 "claim_type": "read_many_write_one", 00:35:10.143 "zoned": false, 00:35:10.143 "supported_io_types": { 00:35:10.143 "read": true, 00:35:10.143 "write": true, 00:35:10.143 "unmap": true, 00:35:10.143 "flush": true, 00:35:10.143 "reset": true, 00:35:10.143 "nvme_admin": true, 00:35:10.143 "nvme_io": true, 00:35:10.143 "nvme_io_md": false, 00:35:10.143 "write_zeroes": true, 00:35:10.143 "zcopy": false, 00:35:10.143 "get_zone_info": false, 00:35:10.143 "zone_management": false, 00:35:10.143 "zone_append": false, 00:35:10.143 "compare": true, 00:35:10.143 "compare_and_write": false, 00:35:10.143 "abort": true, 00:35:10.143 "seek_hole": false, 00:35:10.144 "seek_data": false, 00:35:10.144 "copy": true, 00:35:10.144 "nvme_iov_md": false 00:35:10.144 }, 00:35:10.144 "driver_specific": { 00:35:10.144 "nvme": [ 00:35:10.144 { 00:35:10.144 "pci_address": "0000:00:11.0", 00:35:10.144 "trid": { 00:35:10.144 "trtype": "PCIe", 00:35:10.144 "traddr": "0000:00:11.0" 00:35:10.144 }, 00:35:10.144 "ctrlr_data": { 00:35:10.144 "cntlid": 0, 00:35:10.144 "vendor_id": "0x1b36", 00:35:10.144 "model_number": "QEMU NVMe Ctrl", 00:35:10.144 "serial_number": "12341", 00:35:10.144 "firmware_revision": "8.0.0", 00:35:10.144 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:10.144 "oacs": { 00:35:10.144 "security": 0, 00:35:10.144 "format": 1, 00:35:10.144 "firmware": 0, 00:35:10.144 "ns_manage": 1 00:35:10.144 }, 00:35:10.144 "multi_ctrlr": false, 00:35:10.144 "ana_reporting": false 00:35:10.144 }, 00:35:10.144 "vs": { 00:35:10.144 "nvme_version": "1.4" 00:35:10.144 }, 00:35:10.144 "ns_data": { 00:35:10.144 "id": 1, 00:35:10.144 "can_share": false 00:35:10.144 } 00:35:10.144 } 00:35:10.144 ], 00:35:10.144 "mp_policy": "active_passive" 00:35:10.144 } 00:35:10.144 } 00:35:10.144 ]' 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:10.144 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:10.403 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=b705a355-d535-4e51-9650-a791a8356396 00:35:10.403 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:35:10.403 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b705a355-d535-4e51-9650-a791a8356396 00:35:10.969 10:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:35:10.969 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f58de5d1-0229-4454-b29c-f90e12108c03 00:35:10.970 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f58de5d1-0229-4454-b29c-f90e12108c03 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a8ff1588-81b4-4fa4-8972-024f2f40c8af 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a8ff1588-81b4-4fa4-8972-024f2f40c8af ]] 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a8ff1588-81b4-4fa4-8972-024f2f40c8af 5120 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a8ff1588-81b4-4fa4-8972-024f2f40c8af 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a8ff1588-81b4-4fa4-8972-024f2f40c8af 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a8ff1588-81b4-4fa4-8972-024f2f40c8af 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8ff1588-81b4-4fa4-8972-024f2f40c8af 00:35:11.536 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:11.536 { 00:35:11.536 "name": "a8ff1588-81b4-4fa4-8972-024f2f40c8af", 00:35:11.536 "aliases": [ 00:35:11.536 "lvs/basen1p0" 00:35:11.536 ], 00:35:11.536 "product_name": "Logical Volume", 00:35:11.536 "block_size": 4096, 00:35:11.537 "num_blocks": 5242880, 00:35:11.537 "uuid": "a8ff1588-81b4-4fa4-8972-024f2f40c8af", 00:35:11.537 "assigned_rate_limits": { 00:35:11.537 "rw_ios_per_sec": 0, 00:35:11.537 "rw_mbytes_per_sec": 0, 00:35:11.537 "r_mbytes_per_sec": 0, 00:35:11.537 "w_mbytes_per_sec": 0 00:35:11.537 }, 00:35:11.537 "claimed": false, 00:35:11.537 "zoned": false, 00:35:11.537 "supported_io_types": { 00:35:11.537 "read": true, 00:35:11.537 "write": true, 00:35:11.537 "unmap": true, 00:35:11.537 "flush": false, 00:35:11.537 "reset": true, 00:35:11.537 "nvme_admin": false, 00:35:11.537 "nvme_io": false, 00:35:11.537 "nvme_io_md": false, 00:35:11.537 "write_zeroes": 
true, 00:35:11.537 "zcopy": false, 00:35:11.537 "get_zone_info": false, 00:35:11.537 "zone_management": false, 00:35:11.537 "zone_append": false, 00:35:11.537 "compare": false, 00:35:11.537 "compare_and_write": false, 00:35:11.537 "abort": false, 00:35:11.537 "seek_hole": true, 00:35:11.537 "seek_data": true, 00:35:11.537 "copy": false, 00:35:11.537 "nvme_iov_md": false 00:35:11.537 }, 00:35:11.537 "driver_specific": { 00:35:11.537 "lvol": { 00:35:11.537 "lvol_store_uuid": "f58de5d1-0229-4454-b29c-f90e12108c03", 00:35:11.537 "base_bdev": "basen1", 00:35:11.537 "thin_provision": true, 00:35:11.537 "num_allocated_clusters": 0, 00:35:11.537 "snapshot": false, 00:35:11.537 "clone": false, 00:35:11.537 "esnap_clone": false 00:35:11.537 } 00:35:11.537 } 00:35:11.537 } 00:35:11.537 ]' 00:35:11.537 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:11.537 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:11.537 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:11.795 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:35:11.795 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:35:11.795 10:41:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:35:11.795 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:35:11.795 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:35:11.795 10:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:35:12.054 10:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:35:12.054 10:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:35:12.054 10:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:35:12.313 10:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:35:12.313 10:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:35:12.313 10:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a8ff1588-81b4-4fa4-8972-024f2f40c8af -c cachen1p0 --l2p_dram_limit 2 00:35:12.573 [2024-11-25 10:41:06.747614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.747920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:12.573 [2024-11-25 10:41:06.747961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:12.573 [2024-11-25 10:41:06.747977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.748085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.748106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:12.573 [2024-11-25 10:41:06.748123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:35:12.573 [2024-11-25 10:41:06.748136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.748171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:12.573 [2024-11-25 
10:41:06.749314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:12.573 [2024-11-25 10:41:06.749355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.749370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:12.573 [2024-11-25 10:41:06.749387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.189 ms 00:35:12.573 [2024-11-25 10:41:06.749399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.749552] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 4be8c413-8bff-44f0-99f1-add31d9d010b 00:35:12.573 [2024-11-25 10:41:06.751618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.751843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:35:12.573 [2024-11-25 10:41:06.751873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:35:12.573 [2024-11-25 10:41:06.751890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.762127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.762177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:12.573 [2024-11-25 10:41:06.762216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.135 ms 00:35:12.573 [2024-11-25 10:41:06.762231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.762297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.762318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:12.573 [2024-11-25 10:41:06.762332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:35:12.573 [2024-11-25 10:41:06.762349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.762444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.762467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:12.573 [2024-11-25 10:41:06.762481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:35:12.573 [2024-11-25 10:41:06.762503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.762553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:12.573 [2024-11-25 10:41:06.767951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.767995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:12.573 [2024-11-25 10:41:06.768033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.421 ms 00:35:12.573 [2024-11-25 10:41:06.768046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.768087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.768105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:12.573 [2024-11-25 10:41:06.768121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:12.573 [2024-11-25 10:41:06.768133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.768197] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:35:12.573 [2024-11-25 10:41:06.768352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:12.573 [2024-11-25 10:41:06.768376] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:12.573 [2024-11-25 10:41:06.768392] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:12.573 [2024-11-25 10:41:06.768409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:12.573 [2024-11-25 10:41:06.768424] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:12.573 [2024-11-25 10:41:06.768439] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:12.573 [2024-11-25 10:41:06.768451] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:12.573 [2024-11-25 10:41:06.768468] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:12.573 [2024-11-25 10:41:06.768479] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:12.573 [2024-11-25 10:41:06.768494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.768506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:12.573 [2024-11-25 10:41:06.768521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:35:12.573 [2024-11-25 10:41:06.768533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.768628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-25 10:41:06.768643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:12.573 [2024-11-25 10:41:06.768658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:35:12.573 [2024-11-25 10:41:06.768682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-25 10:41:06.768840] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:12.574 [2024-11-25 10:41:06.768881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:12.574 [2024-11-25 10:41:06.768898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:12.574 [2024-11-25 10:41:06.768911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.768928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:12.574 [2024-11-25 10:41:06.768939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.768954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:12.574 [2024-11-25 10:41:06.768966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:12.574 [2024-11-25 10:41:06.768980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:12.574 [2024-11-25 10:41:06.768991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:12.574 [2024-11-25 10:41:06.769018] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:35:12.574 [2024-11-25 10:41:06.769033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:12.574 [2024-11-25 10:41:06.769058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:12.574 [2024-11-25 10:41:06.769070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:12.574 [2024-11-25 10:41:06.769100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:12.574 [2024-11-25 10:41:06.769114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:12.574 [2024-11-25 10:41:06.769141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:12.574 [2024-11-25 10:41:06.769153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:12.574 [2024-11-25 10:41:06.769178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:12.574 [2024-11-25 10:41:06.769192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:12.574 [2024-11-25 10:41:06.769218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:12.574 [2024-11-25 10:41:06.769229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:12.574 [2024-11-25 10:41:06.769255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:12.574 [2024-11-25 10:41:06.769269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:12.574 [2024-11-25 10:41:06.769297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:12.574 [2024-11-25 10:41:06.769309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:12.574 [2024-11-25 10:41:06.769334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:12.574 [2024-11-25 10:41:06.769374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:12.574 [2024-11-25 10:41:06.769411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:12.574 [2024-11-25 10:41:06.769426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769437] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:35:12.574 [2024-11-25 10:41:06.769454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:12.574 [2024-11-25 10:41:06.769467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.574 [2024-11-25 10:41:06.769495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:12.574 [2024-11-25 10:41:06.769512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:12.574 [2024-11-25 10:41:06.769524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:12.574 [2024-11-25 10:41:06.769538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:12.574 [2024-11-25 10:41:06.769550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:12.574 [2024-11-25 10:41:06.769565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:12.574 [2024-11-25 10:41:06.769583] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:12.574 [2024-11-25 10:41:06.769601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:12.574 [2024-11-25 10:41:06.769633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:12.574 [2024-11-25 10:41:06.769673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:12.574 [2024-11-25 10:41:06.769688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:12.574 [2024-11-25 10:41:06.769700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:12.574 [2024-11-25 10:41:06.769714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:12.574 [2024-11-25 10:41:06.769829] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:12.574 [2024-11-25 10:41:06.769845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:12.574 [2024-11-25 10:41:06.769874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:12.574 [2024-11-25 10:41:06.769886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:12.574 [2024-11-25 10:41:06.769901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:12.574 [2024-11-25 10:41:06.769914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.574 [2024-11-25 10:41:06.769929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:12.574 [2024-11-25 10:41:06.769943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.180 ms 00:35:12.574 [2024-11-25 10:41:06.769958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.574 [2024-11-25 10:41:06.770017] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
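
The superblock layout dump above is denominated in FTL blocks: blk_offs and blk_sz are block counts, and the MiB figures elsewhere in the dump imply a 4 KiB block. A quick cross-check against the base-dev data region, using only numbers from the dump itself:

  # base-dev region type:0x9 spans blk_sz=0x480000 blocks; at 4 KiB per block:
  python3 -c 'print(0x480000 * 4096 // (1024 * 1024), "MiB")'   # -> 18432 MiB, matching data_btm

The scrub notice at the end of the dump is the next step: all 5 NV-cache chunks are wiped (about 2.65 s below) before metadata initialization starts.
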
00:35:12.574 [2024-11-25 10:41:06.770041] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:15.106 [2024-11-25 10:41:09.423798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.106 [2024-11-25 10:41:09.423880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:15.106 [2024-11-25 10:41:09.423918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2653.794 ms 00:35:15.106 [2024-11-25 10:41:09.423933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.364 [2024-11-25 10:41:09.461126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.364 [2024-11-25 10:41:09.461202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:15.364 [2024-11-25 10:41:09.461224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.936 ms 00:35:15.364 [2024-11-25 10:41:09.461240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.364 [2024-11-25 10:41:09.461377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.364 [2024-11-25 10:41:09.461402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:15.364 [2024-11-25 10:41:09.461416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:35:15.364 [2024-11-25 10:41:09.461436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.364 [2024-11-25 10:41:09.502773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.364 [2024-11-25 10:41:09.502859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:15.364 [2024-11-25 10:41:09.502880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.269 ms 00:35:15.364 [2024-11-25 10:41:09.502896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.364 [2024-11-25 10:41:09.502979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.364 [2024-11-25 10:41:09.503005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:15.364 [2024-11-25 10:41:09.503019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:15.364 [2024-11-25 10:41:09.503047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.364 [2024-11-25 10:41:09.503669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.364 [2024-11-25 10:41:09.503715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:15.365 [2024-11-25 10:41:09.503731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:35:15.365 [2024-11-25 10:41:09.503745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.503838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.503860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:15.365 [2024-11-25 10:41:09.503876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:35:15.365 [2024-11-25 10:41:09.503893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.523352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.523401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:15.365 [2024-11-25 10:41:09.523435] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.432 ms 00:35:15.365 [2024-11-25 10:41:09.523449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.536536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:15.365 [2024-11-25 10:41:09.538040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.538074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:15.365 [2024-11-25 10:41:09.538094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.482 ms 00:35:15.365 [2024-11-25 10:41:09.538107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.570943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.570991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:35:15.365 [2024-11-25 10:41:09.571029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.797 ms 00:35:15.365 [2024-11-25 10:41:09.571042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.571169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.571191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:15.365 [2024-11-25 10:41:09.571210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:35:15.365 [2024-11-25 10:41:09.571222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.597602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.597834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:35:15.365 [2024-11-25 10:41:09.597971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.301 ms 00:35:15.365 [2024-11-25 10:41:09.598100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.624091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.624297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:35:15.365 [2024-11-25 10:41:09.624432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.892 ms 00:35:15.365 [2024-11-25 10:41:09.624481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.365 [2024-11-25 10:41:09.625444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.365 [2024-11-25 10:41:09.625480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:15.365 [2024-11-25 10:41:09.625516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:35:15.365 [2024-11-25 10:41:09.625528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.701854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.624 [2024-11-25 10:41:09.701906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:35:15.624 [2024-11-25 10:41:09.701947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.255 ms 00:35:15.624 [2024-11-25 10:41:09.701960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.730654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:15.624 [2024-11-25 10:41:09.730700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:35:15.624 [2024-11-25 10:41:09.730749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.611 ms 00:35:15.624 [2024-11-25 10:41:09.730763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.756851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.624 [2024-11-25 10:41:09.756889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:35:15.624 [2024-11-25 10:41:09.756924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.019 ms 00:35:15.624 [2024-11-25 10:41:09.756935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.783292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.624 [2024-11-25 10:41:09.783496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:15.624 [2024-11-25 10:41:09.783546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.323 ms 00:35:15.624 [2024-11-25 10:41:09.783560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.783602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.624 [2024-11-25 10:41:09.783618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:15.624 [2024-11-25 10:41:09.783637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:15.624 [2024-11-25 10:41:09.783649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.783791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.624 [2024-11-25 10:41:09.783863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:15.624 [2024-11-25 10:41:09.783889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:35:15.624 [2024-11-25 10:41:09.783901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.624 [2024-11-25 10:41:09.785393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3037.148 ms, result 0 00:35:15.624 { 00:35:15.624 "name": "ftl", 00:35:15.624 "uuid": "4be8c413-8bff-44f0-99f1-add31d9d010b" 00:35:15.624 } 00:35:15.624 10:41:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:35:15.882 [2024-11-25 10:41:10.052147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.882 10:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:35:16.141 10:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:35:16.400 [2024-11-25 10:41:10.584800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:16.400 10:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:35:16.659 [2024-11-25 10:41:10.823473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:16.659 10:41:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:17.227 Fill FTL, iteration 1 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83962 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83962 /var/tmp/spdk.tgt.sock 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83962 ']' 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.227 10:41:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:17.227 [2024-11-25 10:41:11.365224] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
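
The four RPCs above (common.sh@121-124) export the freshly created FTL bdev over NVMe/TCP on loopback; condensed, the target-side sequence is (paths relative to the spdk repo):

  scripts/rpc.py nvmf_create_transport --trtype TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

tcp_initiator_setup then launches a second spdk_tgt (pid 83962 here) pinned to core 1 with its own RPC socket, /var/tmp/spdk.tgt.sock, attaches the exported namespace via bdev_nvme_attach_controller so it surfaces as ftln1, and saves the resulting bdev subsystem config (the '{"subsystems": [' ... ']}' assembly below); once ini.json exists, later tcp_initiator_setup calls return immediately at common.sh@154.
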
00:35:17.227 [2024-11-25 10:41:11.365389] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83962 ] 00:35:17.227 [2024-11-25 10:41:11.543949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.487 [2024-11-25 10:41:11.696579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.493 10:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.493 10:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:18.493 10:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:35:18.753 ftln1 00:35:18.753 10:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:35:18.753 10:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83962 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83962 ']' 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83962 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83962 00:35:19.012 killing process with pid 83962 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83962' 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83962 00:35:19.012 10:41:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83962 00:35:20.915 10:41:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:35:20.915 10:41:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:20.915 [2024-11-25 10:41:15.233391] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
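
Each fill iteration pushes 1 GiB of urandom data through the fabric: bs=1048576 bytes x count=1024 at queue depth 2, with --seek advancing in bs-sized blocks. Stripped of the harness plumbing, iteration 1 reduces to:

  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

In this run the fill proceeds at roughly 209 MBps average, as the Copying lines below show.
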
00:35:20.915 [2024-11-25 10:41:15.233906] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84015 ] 00:35:21.174 [2024-11-25 10:41:15.419306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.433 [2024-11-25 10:41:15.538908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.811  [2024-11-25T10:41:18.080Z] Copying: 209/1024 [MB] (209 MBps) [2024-11-25T10:41:19.015Z] Copying: 414/1024 [MB] (205 MBps) [2024-11-25T10:41:20.392Z] Copying: 625/1024 [MB] (211 MBps) [2024-11-25T10:41:20.960Z] Copying: 837/1024 [MB] (212 MBps) [2024-11-25T10:41:21.922Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:35:27.589 00:35:27.589 Calculate MD5 checksum, iteration 1 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:27.589 10:41:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:27.849 [2024-11-25 10:41:21.958339] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
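
Verification is read-back plus checksum: the same spdk_dd binary reads the 1024 MiB back out of ftln1 into a flat file, with --skip mirroring the earlier --seek, and the file's md5sum becomes the reference sum for the iteration:

  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum test/ftl/file | cut -f1 -d' '    # captured into sums[i]

The read-back runs noticeably faster than the fill in this log (about 445 MBps average against 209 MBps for the writes).
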
00:35:27.849 [2024-11-25 10:41:21.958840] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84082 ] 00:35:27.849 [2024-11-25 10:41:22.146288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.108 [2024-11-25 10:41:22.277188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.484  [2024-11-25T10:41:24.754Z] Copying: 457/1024 [MB] (457 MBps) [2024-11-25T10:41:25.014Z] Copying: 900/1024 [MB] (443 MBps) [2024-11-25T10:41:25.952Z] Copying: 1024/1024 [MB] (average 445 MBps) 00:35:31.619 00:35:31.619 10:41:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:35:31.619 10:41:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:34.153 Fill FTL, iteration 2 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c54e06a2e1b7a3c8bc6a4d75588fc42f 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:34.153 10:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:34.153 [2024-11-25 10:41:28.091475] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
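
seek and skip then advance by count, so iteration 2 writes at --seek=1024 and reads back at --skip=1024; with iterations=2 the test leaves 2 GiB of checksummed data on the device. Reconstructed from the upgrade_shutdown.sh@38-48 trace lines, the loop shape is roughly the following (a sketch, not the verbatim script; $file stands for test/ftl/file and bs/count/qd/iterations are set as above):

  sums=()
  seek=0; skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum $file | cut -f1 -d' ')
  done
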
00:35:34.153 [2024-11-25 10:41:28.091931] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84147 ] 00:35:34.153 [2024-11-25 10:41:28.282533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.153 [2024-11-25 10:41:28.430728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.062  [2024-11-25T10:41:30.963Z] Copying: 222/1024 [MB] (222 MBps) [2024-11-25T10:41:31.900Z] Copying: 441/1024 [MB] (219 MBps) [2024-11-25T10:41:33.277Z] Copying: 663/1024 [MB] (222 MBps) [2024-11-25T10:41:33.536Z] Copying: 886/1024 [MB] (223 MBps) [2024-11-25T10:41:34.916Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:35:40.583 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:35:40.583 Calculate MD5 checksum, iteration 2 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:40.583 10:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:40.583 [2024-11-25 10:41:34.681907] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
00:35:40.583 [2024-11-25 10:41:34.682288] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84217 ] 00:35:40.583 [2024-11-25 10:41:34.865299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.847 [2024-11-25 10:41:34.989757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.751  [2024-11-25T10:41:37.652Z] Copying: 518/1024 [MB] (518 MBps) [2024-11-25T10:41:40.180Z] Copying: 1024/1024 [MB] (average 524 MBps) 00:35:45.847 00:35:45.847 10:41:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:35:45.847 10:41:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:47.749 10:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:47.749 10:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=550d22f7c0129a4a87b02670bd094425 00:35:47.749 10:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:47.750 10:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:47.750 10:41:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:47.750 [2024-11-25 10:41:42.008441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:47.750 [2024-11-25 10:41:42.008497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:47.750 [2024-11-25 10:41:42.008534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:47.750 [2024-11-25 10:41:42.008547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:47.750 [2024-11-25 10:41:42.008583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:47.750 [2024-11-25 10:41:42.008600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:47.750 [2024-11-25 10:41:42.008613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:47.750 [2024-11-25 10:41:42.008631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:47.750 [2024-11-25 10:41:42.008660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:47.750 [2024-11-25 10:41:42.008674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:47.750 [2024-11-25 10:41:42.008687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:47.750 [2024-11-25 10:41:42.008698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:47.750 [2024-11-25 10:41:42.008803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.339 ms, result 0 00:35:47.750 true 00:35:47.750 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:48.009 { 00:35:48.009 "name": "ftl", 00:35:48.009 "properties": [ 00:35:48.009 { 00:35:48.009 "name": "superblock_version", 00:35:48.009 "value": 5, 00:35:48.009 "read-only": true 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "name": "base_device", 00:35:48.009 "bands": [ 00:35:48.009 { 00:35:48.009 "id": 0, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 
00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 1, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 2, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 3, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 4, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 5, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 6, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 7, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 8, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 9, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 10, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 11, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 12, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 13, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 14, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 15, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 16, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 17, 00:35:48.009 "state": "FREE", 00:35:48.009 "validity": 0.0 00:35:48.009 } 00:35:48.009 ], 00:35:48.009 "read-only": true 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "name": "cache_device", 00:35:48.009 "type": "bdev", 00:35:48.009 "chunks": [ 00:35:48.009 { 00:35:48.009 "id": 0, 00:35:48.009 "state": "INACTIVE", 00:35:48.009 "utilization": 0.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 1, 00:35:48.009 "state": "CLOSED", 00:35:48.009 "utilization": 1.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 2, 00:35:48.009 "state": "CLOSED", 00:35:48.009 "utilization": 1.0 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 3, 00:35:48.009 "state": "OPEN", 00:35:48.009 "utilization": 0.001953125 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "id": 4, 00:35:48.009 "state": "OPEN", 00:35:48.009 "utilization": 0.0 00:35:48.009 } 00:35:48.009 ], 00:35:48.009 "read-only": true 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "name": "verbose_mode", 00:35:48.009 "value": true, 00:35:48.009 "unit": "", 00:35:48.009 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:48.009 }, 00:35:48.009 { 00:35:48.009 "name": "prep_upgrade_on_shutdown", 00:35:48.009 "value": false, 00:35:48.009 "unit": "", 00:35:48.009 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:48.009 } 00:35:48.009 ] 00:35:48.009 } 00:35:48.009 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:35:48.268 [2024-11-25 10:41:42.553082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:48.268 [2024-11-25 10:41:42.553133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:48.268 [2024-11-25 10:41:42.553168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:48.268 [2024-11-25 10:41:42.553180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:48.268 [2024-11-25 10:41:42.553212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:48.268 [2024-11-25 10:41:42.553228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:48.268 [2024-11-25 10:41:42.553240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:48.268 [2024-11-25 10:41:42.553251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:48.268 [2024-11-25 10:41:42.553276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:48.268 [2024-11-25 10:41:42.553290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:48.268 [2024-11-25 10:41:42.553301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:48.268 [2024-11-25 10:41:42.553312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:48.268 [2024-11-25 10:41:42.553383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.286 ms, result 0 00:35:48.268 true 00:35:48.268 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:48.268 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:35:48.268 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:48.836 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:35:48.836 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:35:48.836 10:41:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:48.836 [2024-11-25 10:41:43.152812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:48.836 [2024-11-25 10:41:43.152898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:48.836 [2024-11-25 10:41:43.152936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:48.836 [2024-11-25 10:41:43.152948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:48.836 [2024-11-25 10:41:43.152984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:48.836 [2024-11-25 10:41:43.153001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:48.836 [2024-11-25 10:41:43.153014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:48.836 [2024-11-25 10:41:43.153026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:48.836 [2024-11-25 10:41:43.153053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:48.836 [2024-11-25 10:41:43.153067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:48.836 [2024-11-25 10:41:43.153080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:48.836 [2024-11-25 10:41:43.153091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:48.836 [2024-11-25 10:41:43.153181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.352 ms, result 0 00:35:48.836 true 00:35:49.095 10:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:49.095 { 00:35:49.095 "name": "ftl", 00:35:49.095 "properties": [ 00:35:49.095 { 00:35:49.095 "name": "superblock_version", 00:35:49.095 "value": 5, 00:35:49.095 "read-only": true 00:35:49.095 }, 00:35:49.095 { 00:35:49.095 "name": "base_device", 00:35:49.095 "bands": [ 00:35:49.095 { 00:35:49.095 "id": 0, 00:35:49.095 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 1, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 2, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 3, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 4, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 5, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 6, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 7, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 8, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 9, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 10, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 11, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 12, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 13, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 14, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 15, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 16, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 17, 00:35:49.096 "state": "FREE", 00:35:49.096 "validity": 0.0 00:35:49.096 } 00:35:49.096 ], 00:35:49.096 "read-only": true 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "name": "cache_device", 00:35:49.096 "type": "bdev", 00:35:49.096 "chunks": [ 00:35:49.096 { 00:35:49.096 "id": 0, 00:35:49.096 "state": "INACTIVE", 00:35:49.096 "utilization": 0.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 1, 00:35:49.096 "state": "CLOSED", 00:35:49.096 "utilization": 1.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 2, 00:35:49.096 "state": "CLOSED", 00:35:49.096 "utilization": 1.0 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 3, 00:35:49.096 "state": "OPEN", 00:35:49.096 "utilization": 0.001953125 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "id": 4, 00:35:49.096 "state": "OPEN", 00:35:49.096 "utilization": 0.0 00:35:49.096 } 00:35:49.096 ], 00:35:49.096 "read-only": true 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "name": "verbose_mode", 
00:35:49.096 "value": true, 00:35:49.096 "unit": "", 00:35:49.096 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:49.096 }, 00:35:49.096 { 00:35:49.096 "name": "prep_upgrade_on_shutdown", 00:35:49.096 "value": true, 00:35:49.096 "unit": "", 00:35:49.096 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:49.096 } 00:35:49.096 ] 00:35:49.096 } 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83834 ]] 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83834 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83834 ']' 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83834 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83834 00:35:49.355 killing process with pid 83834 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83834' 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83834 00:35:49.355 10:41:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83834 00:35:50.292 [2024-11-25 10:41:44.343798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:50.292 [2024-11-25 10:41:44.360254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:50.292 [2024-11-25 10:41:44.360296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:50.292 [2024-11-25 10:41:44.360315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:50.292 [2024-11-25 10:41:44.360326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:50.292 [2024-11-25 10:41:44.360353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:50.292 [2024-11-25 10:41:44.363673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:50.292 [2024-11-25 10:41:44.363908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:50.292 [2024-11-25 10:41:44.363935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.302 ms 00:35:50.293 [2024-11-25 10:41:44.363947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.789357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.789626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:00.295 [2024-11-25 10:41:52.789656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8425.415 ms 00:36:00.295 [2024-11-25 10:41:52.789669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.790884] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.790911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:00.295 [2024-11-25 10:41:52.790926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.179 ms 00:36:00.295 [2024-11-25 10:41:52.790952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.792085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.792129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:00.295 [2024-11-25 10:41:52.792160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.094 ms 00:36:00.295 [2024-11-25 10:41:52.792170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.803239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.803411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:00.295 [2024-11-25 10:41:52.803436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.970 ms 00:36:00.295 [2024-11-25 10:41:52.803448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.810709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.810959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:00.295 [2024-11-25 10:41:52.811002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.218 ms 00:36:00.295 [2024-11-25 10:41:52.811014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.811129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.811163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:00.295 [2024-11-25 10:41:52.811177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:36:00.295 [2024-11-25 10:41:52.811196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.821292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.821325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:00.295 [2024-11-25 10:41:52.821340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.067 ms 00:36:00.295 [2024-11-25 10:41:52.821349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.831806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.831851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:00.295 [2024-11-25 10:41:52.831882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.422 ms 00:36:00.295 [2024-11-25 10:41:52.831892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.841884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.841919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:00.295 [2024-11-25 10:41:52.841933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.955 ms 00:36:00.295 [2024-11-25 10:41:52.841943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.852662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.852700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:00.295 [2024-11-25 10:41:52.852732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.653 ms 00:36:00.295 [2024-11-25 10:41:52.852742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.852813] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:00.295 [2024-11-25 10:41:52.852837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:00.295 [2024-11-25 10:41:52.852851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:00.295 [2024-11-25 10:41:52.852876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:00.295 [2024-11-25 10:41:52.852903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.852993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:00.295 [2024-11-25 10:41:52.853069] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:00.295 [2024-11-25 10:41:52.853080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4be8c413-8bff-44f0-99f1-add31d9d010b 00:36:00.295 [2024-11-25 10:41:52.853091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:00.295 [2024-11-25 10:41:52.853101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:36:00.295 [2024-11-25 10:41:52.853110] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:36:00.295 [2024-11-25 10:41:52.853121] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:36:00.295 [2024-11-25 10:41:52.853146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:00.295 [2024-11-25 10:41:52.853173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:00.295 [2024-11-25 10:41:52.853189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:00.295 [2024-11-25 10:41:52.853199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:00.295 [2024-11-25 10:41:52.853223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:00.295 [2024-11-25 10:41:52.853233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.853244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:00.295 [2024-11-25 10:41:52.853260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:36:00.295 [2024-11-25 10:41:52.853278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.869996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.870036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:00.295 [2024-11-25 10:41:52.870070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.692 ms 00:36:00.295 [2024-11-25 10:41:52.870081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.870524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.295 [2024-11-25 10:41:52.870539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:00.295 [2024-11-25 10:41:52.870551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:36:00.295 [2024-11-25 10:41:52.870561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.917681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.295 [2024-11-25 10:41:52.917724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:00.295 [2024-11-25 10:41:52.917740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.295 [2024-11-25 10:41:52.917756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.917822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.295 [2024-11-25 10:41:52.917839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:00.295 [2024-11-25 10:41:52.917850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.295 [2024-11-25 10:41:52.917861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.917968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.295 [2024-11-25 10:41:52.918010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:00.295 [2024-11-25 10:41:52.918062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.295 [2024-11-25 10:41:52.918083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.295 [2024-11-25 10:41:52.918136] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:52.918164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:00.296 [2024-11-25 10:41:52.918178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:52.918189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.003695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.003994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:00.296 [2024-11-25 10:41:53.004024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.004037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.072900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.072947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:00.296 [2024-11-25 10:41:53.072965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.072975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.073090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.073107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:00.296 [2024-11-25 10:41:53.073118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.073128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.073182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.073204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:00.296 [2024-11-25 10:41:53.073216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.073226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.073332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.073349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:00.296 [2024-11-25 10:41:53.073361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.073371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.073418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.073434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:00.296 [2024-11-25 10:41:53.073451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.073461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.073506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.073520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:00.296 [2024-11-25 10:41:53.073530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.073540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 
[2024-11-25 10:41:53.073590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.296 [2024-11-25 10:41:53.073609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:00.296 [2024-11-25 10:41:53.073620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.296 [2024-11-25 10:41:53.073630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.296 [2024-11-25 10:41:53.073764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8713.526 ms, result 0 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84453 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84453 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84453 ']' 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.200 10:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:02.200 [2024-11-25 10:41:56.161016] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
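The WAF figure in the shutdown statistics dump above is total media writes divided by user writes. A quick arithmetic check of the two counters as logged (an illustrative shell snippet, not part of the test scripts):

    # Recompute the write amplification factor from the ftl_dev_dump_stats counters.
    total_writes=786752    # all media writes, i.e. user data plus FTL-internal traffic
    user_writes=524288
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
    # prints WAF: 1.5006, matching the [FTL][ftl] WAF line in the dump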
00:36:02.200 [2024-11-25 10:41:56.161217] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84453 ] 00:36:02.200 [2024-11-25 10:41:56.343688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.200 [2024-11-25 10:41:56.468672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.137 [2024-11-25 10:41:57.313186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:03.137 [2024-11-25 10:41:57.313278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:03.137 [2024-11-25 10:41:57.464040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.137 [2024-11-25 10:41:57.464082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:03.137 [2024-11-25 10:41:57.464118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:03.137 [2024-11-25 10:41:57.464128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.137 [2024-11-25 10:41:57.464189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.137 [2024-11-25 10:41:57.464206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:03.137 [2024-11-25 10:41:57.464217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:36:03.137 [2024-11-25 10:41:57.464227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.137 [2024-11-25 10:41:57.464263] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:03.137 [2024-11-25 10:41:57.465337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:03.137 [2024-11-25 10:41:57.465593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.137 [2024-11-25 10:41:57.465709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:03.137 [2024-11-25 10:41:57.465883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.342 ms 00:36:03.137 [2024-11-25 10:41:57.465949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.137 [2024-11-25 10:41:57.468408] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:03.396 [2024-11-25 10:41:57.484301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.484338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:03.396 [2024-11-25 10:41:57.484376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.900 ms 00:36:03.396 [2024-11-25 10:41:57.484387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.484474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.484492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:03.396 [2024-11-25 10:41:57.484504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:36:03.396 [2024-11-25 10:41:57.484513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.494100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 
10:41:57.494151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:03.396 [2024-11-25 10:41:57.494181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.497 ms 00:36:03.396 [2024-11-25 10:41:57.494192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.494272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.494290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:03.396 [2024-11-25 10:41:57.494302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:36:03.396 [2024-11-25 10:41:57.494312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.494366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.494383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:03.396 [2024-11-25 10:41:57.494399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:36:03.396 [2024-11-25 10:41:57.494410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.494460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:03.396 [2024-11-25 10:41:57.499225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.499258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:03.396 [2024-11-25 10:41:57.499289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.773 ms 00:36:03.396 [2024-11-25 10:41:57.499306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.499358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.499373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:03.396 [2024-11-25 10:41:57.499385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:03.396 [2024-11-25 10:41:57.499395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.499456] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:03.396 [2024-11-25 10:41:57.499486] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:03.396 [2024-11-25 10:41:57.499529] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:03.396 [2024-11-25 10:41:57.499547] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:03.396 [2024-11-25 10:41:57.499665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:03.396 [2024-11-25 10:41:57.499680] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:03.396 [2024-11-25 10:41:57.499693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:03.396 [2024-11-25 10:41:57.499706] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:03.396 [2024-11-25 10:41:57.499718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:36:03.396 [2024-11-25 10:41:57.499734] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:03.396 [2024-11-25 10:41:57.499744] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:03.396 [2024-11-25 10:41:57.499754] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:03.396 [2024-11-25 10:41:57.499764] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:03.396 [2024-11-25 10:41:57.499796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.499807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:03.396 [2024-11-25 10:41:57.499817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:36:03.396 [2024-11-25 10:41:57.499851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.499960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.396 [2024-11-25 10:41:57.499975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:03.396 [2024-11-25 10:41:57.499986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:36:03.396 [2024-11-25 10:41:57.500002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.396 [2024-11-25 10:41:57.500104] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:03.396 [2024-11-25 10:41:57.500120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:03.396 [2024-11-25 10:41:57.500131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:03.396 [2024-11-25 10:41:57.500142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.396 [2024-11-25 10:41:57.500152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:03.397 [2024-11-25 10:41:57.500161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:03.397 [2024-11-25 10:41:57.500180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:03.397 [2024-11-25 10:41:57.500192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:03.397 [2024-11-25 10:41:57.500217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:03.397 [2024-11-25 10:41:57.500236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:03.397 [2024-11-25 10:41:57.500245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:03.397 [2024-11-25 10:41:57.500270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:03.397 [2024-11-25 10:41:57.500280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:03.397 [2024-11-25 10:41:57.500300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:03.397 [2024-11-25 10:41:57.500309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500319] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:03.397 [2024-11-25 10:41:57.500328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:03.397 [2024-11-25 10:41:57.500337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:03.397 [2024-11-25 10:41:57.500356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:03.397 [2024-11-25 10:41:57.500371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:03.397 [2024-11-25 10:41:57.500403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:03.397 [2024-11-25 10:41:57.500412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:03.397 [2024-11-25 10:41:57.500445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:03.397 [2024-11-25 10:41:57.500460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:03.397 [2024-11-25 10:41:57.500479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:03.397 [2024-11-25 10:41:57.500488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:03.397 [2024-11-25 10:41:57.500506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:03.397 [2024-11-25 10:41:57.500535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:03.397 [2024-11-25 10:41:57.500562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:03.397 [2024-11-25 10:41:57.500571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500580] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:03.397 [2024-11-25 10:41:57.500592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:03.397 [2024-11-25 10:41:57.500602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:03.397 [2024-11-25 10:41:57.500633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:03.397 [2024-11-25 10:41:57.500644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:03.397 [2024-11-25 10:41:57.500654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:03.397 [2024-11-25 10:41:57.500663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:03.397 [2024-11-25 10:41:57.500672] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:03.397 [2024-11-25 10:41:57.500683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:03.397 [2024-11-25 10:41:57.500694] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:03.397 [2024-11-25 10:41:57.500708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:03.397 [2024-11-25 10:41:57.500730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:03.397 [2024-11-25 10:41:57.500760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:03.397 [2024-11-25 10:41:57.500771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:03.397 [2024-11-25 10:41:57.500781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:03.397 [2024-11-25 10:41:57.500812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:03.397 [2024-11-25 10:41:57.500901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:03.397 [2024-11-25 10:41:57.500913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:03.397 [2024-11-25 10:41:57.500937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:03.397 [2024-11-25 10:41:57.500947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:03.397 [2024-11-25 10:41:57.500957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:03.397 [2024-11-25 10:41:57.500969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:03.397 [2024-11-25 10:41:57.500979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:03.397 [2024-11-25 10:41:57.500990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:36:03.397 [2024-11-25 10:41:57.501006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:03.397 [2024-11-25 10:41:57.501064] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:36:03.397 [2024-11-25 10:41:57.501081] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:05.928 [2024-11-25 10:42:00.188017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:05.928 [2024-11-25 10:42:00.188366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:05.928 [2024-11-25 10:42:00.188489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2686.966 ms 00:36:05.928 [2024-11-25 10:42:00.188640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:05.928 [2024-11-25 10:42:00.223461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:05.928 [2024-11-25 10:42:00.223735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:05.928 [2024-11-25 10:42:00.223909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.511 ms 00:36:05.928 [2024-11-25 10:42:00.224074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:05.928 [2024-11-25 10:42:00.224284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:05.928 [2024-11-25 10:42:00.224350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:05.928 [2024-11-25 10:42:00.224459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:05.928 [2024-11-25 10:42:00.224506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.187 [2024-11-25 10:42:00.265761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.187 [2024-11-25 10:42:00.266065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:06.187 [2024-11-25 10:42:00.266222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.025 ms 00:36:06.187 [2024-11-25 10:42:00.266368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.187 [2024-11-25 10:42:00.266491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.187 [2024-11-25 10:42:00.266666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:06.187 [2024-11-25 10:42:00.266788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:06.187 [2024-11-25 10:42:00.266845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.187 [2024-11-25 10:42:00.267848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.187 [2024-11-25 10:42:00.268013] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:06.187 [2024-11-25 10:42:00.268135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:36:06.187 [2024-11-25 10:42:00.268233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.187 [2024-11-25 10:42:00.268372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.187 [2024-11-25 10:42:00.268442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:06.187 [2024-11-25 10:42:00.268533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:36:06.187 [2024-11-25 10:42:00.268577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.187 [2024-11-25 10:42:00.287971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.187 [2024-11-25 10:42:00.288156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:06.187 [2024-11-25 10:42:00.288269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.334 ms 00:36:06.188 [2024-11-25 10:42:00.288315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.303627] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:36:06.188 [2024-11-25 10:42:00.303818] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:06.188 [2024-11-25 10:42:00.303961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.304004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:36:06.188 [2024-11-25 10:42:00.304130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.459 ms 00:36:06.188 [2024-11-25 10:42:00.304176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.319571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.319812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:36:06.188 [2024-11-25 10:42:00.319863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.321 ms 00:36:06.188 [2024-11-25 10:42:00.319879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.334095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.334134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:36:06.188 [2024-11-25 10:42:00.334166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.165 ms 00:36:06.188 [2024-11-25 10:42:00.334177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.347488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.347525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:36:06.188 [2024-11-25 10:42:00.347556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.252 ms 00:36:06.188 [2024-11-25 10:42:00.347567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.348453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.348494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:06.188 [2024-11-25 
10:42:00.348525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.758 ms 00:36:06.188 [2024-11-25 10:42:00.348537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.431200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.431554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:06.188 [2024-11-25 10:42:00.431680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.633 ms 00:36:06.188 [2024-11-25 10:42:00.431705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.443284] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:06.188 [2024-11-25 10:42:00.444343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.444379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:06.188 [2024-11-25 10:42:00.444413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.502 ms 00:36:06.188 [2024-11-25 10:42:00.444425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.444573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.444604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:36:06.188 [2024-11-25 10:42:00.444619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:06.188 [2024-11-25 10:42:00.444634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.444719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.444738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:06.188 [2024-11-25 10:42:00.444750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:36:06.188 [2024-11-25 10:42:00.444787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.444846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.444874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:06.188 [2024-11-25 10:42:00.444888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:06.188 [2024-11-25 10:42:00.444927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.444976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:06.188 [2024-11-25 10:42:00.444994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.445006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:06.188 [2024-11-25 10:42:00.445018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:36:06.188 [2024-11-25 10:42:00.445030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:06.188 [2024-11-25 10:42:00.473375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:06.188 [2024-11-25 10:42:00.473425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:06.188 [2024-11-25 10:42:00.473459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.296 ms 00:36:06.188 [2024-11-25 10:42:00.473480] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:06.188 [2024-11-25 10:42:00.473575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:06.188 [2024-11-25 10:42:00.473593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:36:06.188 [2024-11-25 10:42:00.473606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms
00:36:06.188 [2024-11-25 10:42:00.473617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:06.188 [2024-11-25 10:42:00.475260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3010.545 ms, result 0
00:36:06.188 [2024-11-25 10:42:00.489830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:06.188 [2024-11-25 10:42:00.505841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:36:06.188 [2024-11-25 10:42:00.514454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:36:06.447 10:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:06.447 10:42:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:36:06.447 10:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:36:06.447 10:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:36:06.447 10:42:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:36:06.706 [2024-11-25 10:42:00.818530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:06.706 [2024-11-25 10:42:00.818800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:36:06.706 [2024-11-25 10:42:00.818928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:36:06.706 [2024-11-25 10:42:00.818983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:06.706 [2024-11-25 10:42:00.819171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:06.706 [2024-11-25 10:42:00.819219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:36:06.706 [2024-11-25 10:42:00.819261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:36:06.706 [2024-11-25 10:42:00.819298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:06.706 [2024-11-25 10:42:00.819361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:06.706 [2024-11-25 10:42:00.819504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:36:06.706 [2024-11-25 10:42:00.819554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms
00:36:06.706 [2024-11-25 10:42:00.819590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:06.706 [2024-11-25 10:42:00.819705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.155 ms, result 0
00:36:06.706 true
00:36:06.706 10:42:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:36:06.964 {
00:36:06.964 "name": "ftl",
00:36:06.964 "properties": [
00:36:06.964 {
00:36:06.964 "name": "superblock_version",
00:36:06.964 "value": 5,
00:36:06.964 "read-only": true
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "name": "base_device",
00:36:06.964 "bands": [
00:36:06.964 {
00:36:06.964 "id": 0,
00:36:06.964 "state": "CLOSED",
00:36:06.964 "validity": 1.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 1,
00:36:06.964 "state": "CLOSED",
00:36:06.964 "validity": 1.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 2,
00:36:06.964 "state": "CLOSED",
00:36:06.964 "validity": 0.007843137254901933
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 3,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 4,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 5,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 6,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 7,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 8,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 9,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 10,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 11,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 12,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 13,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 14,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 15,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 16,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 17,
00:36:06.964 "state": "FREE",
00:36:06.964 "validity": 0.0
00:36:06.964 }
00:36:06.964 ],
00:36:06.964 "read-only": true
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "name": "cache_device",
00:36:06.964 "type": "bdev",
00:36:06.964 "chunks": [
00:36:06.964 {
00:36:06.964 "id": 0,
00:36:06.964 "state": "INACTIVE",
00:36:06.964 "utilization": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 1,
00:36:06.964 "state": "OPEN",
00:36:06.964 "utilization": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 2,
00:36:06.964 "state": "OPEN",
00:36:06.964 "utilization": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 3,
00:36:06.964 "state": "FREE",
00:36:06.964 "utilization": 0.0
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "id": 4,
00:36:06.964 "state": "FREE",
00:36:06.964 "utilization": 0.0
00:36:06.964 }
00:36:06.964 ],
00:36:06.964 "read-only": true
00:36:06.964 },
00:36:06.964 {
00:36:06.964 "name": "verbose_mode",
00:36:06.964 "value": true,
00:36:06.964 "unit": "",
00:36:06.965 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:36:06.965 },
00:36:06.965 {
00:36:06.965 "name": "prep_upgrade_on_shutdown",
00:36:06.965 "value": false,
00:36:06.965 "unit": "",
00:36:06.965 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:36:06.965 }
00:36:06.965 ]
00:36:06.965 }
00:36:06.965 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:36:06.965 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:36:07.223 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:36:07.223 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:36:07.224 Validate MD5 checksum, iteration 1
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:36:07.224 10:42:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:36:07.482 [2024-11-25 10:42:01.614186] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization...
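The used=0 and opened=0 assignments traced above come from jq queries over the bdev_ftl_get_properties output shown earlier: every cache_device chunk reports utilization 0.0, so the filter counts zero in-use chunks and both [[ 0 -ne 0 ]] guards fall through. The same query, run standalone against a saved copy of the RPC output (props.json is a hypothetical capture, not a file the test creates):

    # Count cache chunks with non-zero utilization, as upgrade_shutdown.sh@82 does.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
    jq '[.properties[] | select(.name == "cache_device")
         | .chunks[] | select(.utilization != 0.0)] | length' props.json
    # prints 0 for the property dump above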
00:36:07.482 [2024-11-25 10:42:01.614567] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84522 ] 00:36:07.482 [2024-11-25 10:42:01.791607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.745 [2024-11-25 10:42:01.936362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.650  [2024-11-25T10:42:04.625Z] Copying: 490/1024 [MB] (490 MBps) [2024-11-25T10:42:04.890Z] Copying: 975/1024 [MB] (485 MBps) [2024-11-25T10:42:06.265Z] Copying: 1024/1024 [MB] (average 488 MBps) 00:36:11.932 00:36:11.932 10:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:11.932 10:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:13.835 Validate MD5 checksum, iteration 2 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c54e06a2e1b7a3c8bc6a4d75588fc42f 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c54e06a2e1b7a3c8bc6a4d75588fc42f != \c\5\4\e\0\6\a\2\e\1\b\7\a\3\c\8\b\c\6\a\4\d\7\5\5\8\8\f\c\4\2\f ]] 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:13.835 10:42:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:13.835 [2024-11-25 10:42:08.087757] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
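Both 'Validate MD5 checksum' iterations traced here follow the same pattern: spdk_dd reads 1024 MiB from the ftln1 initiator-side bdev into a scratch file, the skip offset advances by 1024 blocks, and the file's md5sum is compared against a checksum recorded when the data was originally written (that write phase precedes this excerpt). A condensed sketch of the loop, with the reference array invented for illustration:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${ref_sums[i]}" ]] || return 1    # ref_sums[] is hypothetical
    done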
00:36:13.835 [2024-11-25 10:42:08.088189] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84589 ] 00:36:14.093 [2024-11-25 10:42:08.267478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.093 [2024-11-25 10:42:08.405337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.008  [2024-11-25T10:42:11.278Z] Copying: 488/1024 [MB] (488 MBps) [2024-11-25T10:42:11.278Z] Copying: 974/1024 [MB] (486 MBps) [2024-11-25T10:42:13.183Z] Copying: 1024/1024 [MB] (average 486 MBps) 00:36:18.850 00:36:18.850 10:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:18.850 10:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=550d22f7c0129a4a87b02670bd094425 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 550d22f7c0129a4a87b02670bd094425 != \5\5\0\d\2\2\f\7\c\0\1\2\9\a\4\a\8\7\b\0\2\6\7\0\b\d\0\9\4\4\2\5 ]] 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84453 ]] 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84453 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84662 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84662 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84662 ']' 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
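tcp_target_shutdown_dirty, traced just above, is the pivotal step of this phase: the target process holding the FTL instance is killed with SIGKILL, so none of the graceful 'FTL shutdown' actions seen earlier in the log can run, and a new target (pid 84662) is then started from the same saved JSON config so the next startup exercises FTL's dirty-shutdown path. A minimal sketch of that sequence (paths shortened, pid handling simplified; waitforlisten is the autotest helper visible in the trace):

    kill -9 "$spdk_tgt_pid"       # no graceful FTL shutdown; on-media state stays dirty
    unset spdk_tgt_pid
    "$rootdir/build/bin/spdk_tgt" "--cpumask=[0]" --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" # returns once /var/tmp/spdk.sock accepts RPCs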
00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.767 10:42:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:20.767 [2024-11-25 10:42:14.914060] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:36:20.767 [2024-11-25 10:42:14.914443] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84662 ] 00:36:20.767 [2024-11-25 10:42:15.086095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84453 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:36:21.027 [2024-11-25 10:42:15.191074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.965 [2024-11-25 10:42:16.050139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:21.965 [2024-11-25 10:42:16.050409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:21.965 [2024-11-25 10:42:16.201507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.201562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:21.965 [2024-11-25 10:42:16.201598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:21.965 [2024-11-25 10:42:16.201609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.201673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.201691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:21.965 [2024-11-25 10:42:16.201702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:36:21.965 [2024-11-25 10:42:16.201712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.201750] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:21.965 [2024-11-25 10:42:16.202737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:21.965 [2024-11-25 10:42:16.202807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.202839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:21.965 [2024-11-25 10:42:16.202851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:36:21.965 [2024-11-25 10:42:16.202861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.203356] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:21.965 [2024-11-25 10:42:16.222720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.222761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:21.965 [2024-11-25 10:42:16.222824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.369 ms 00:36:21.965 [2024-11-25 10:42:16.222836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.232903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:36:21.965 [2024-11-25 10:42:16.233101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:21.965 [2024-11-25 10:42:16.233151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:36:21.965 [2024-11-25 10:42:16.233164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.233662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.233687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:21.965 [2024-11-25 10:42:16.233700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:36:21.965 [2024-11-25 10:42:16.233711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.233789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.233811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:21.965 [2024-11-25 10:42:16.233844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:36:21.965 [2024-11-25 10:42:16.233870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.233906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.233921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:21.965 [2024-11-25 10:42:16.233933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:36:21.965 [2024-11-25 10:42:16.233943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.233974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:21.965 [2024-11-25 10:42:16.237436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.237468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:21.965 [2024-11-25 10:42:16.237498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.468 ms 00:36:21.965 [2024-11-25 10:42:16.237508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.237542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.237555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:21.965 [2024-11-25 10:42:16.237566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:21.965 [2024-11-25 10:42:16.237575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.237615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:21.965 [2024-11-25 10:42:16.237642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:21.965 [2024-11-25 10:42:16.237686] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:21.965 [2024-11-25 10:42:16.237708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:21.965 [2024-11-25 10:42:16.237845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:21.965 [2024-11-25 10:42:16.237864] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:21.965 [2024-11-25 10:42:16.237878] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:21.965 [2024-11-25 10:42:16.237891] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:21.965 [2024-11-25 10:42:16.237919] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:21.965 [2024-11-25 10:42:16.237930] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:21.965 [2024-11-25 10:42:16.237939] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:21.965 [2024-11-25 10:42:16.237949] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:21.965 [2024-11-25 10:42:16.237960] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:21.965 [2024-11-25 10:42:16.237971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.965 [2024-11-25 10:42:16.237987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:21.965 [2024-11-25 10:42:16.237998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:36:21.965 [2024-11-25 10:42:16.238007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.965 [2024-11-25 10:42:16.238092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.966 [2024-11-25 10:42:16.238106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:21.966 [2024-11-25 10:42:16.238117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:36:21.966 [2024-11-25 10:42:16.238126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.966 [2024-11-25 10:42:16.238294] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:21.966 [2024-11-25 10:42:16.238310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:21.966 [2024-11-25 10:42:16.238331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:21.966 [2024-11-25 10:42:16.238364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:21.966 [2024-11-25 10:42:16.238385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:21.966 [2024-11-25 10:42:16.238395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:21.966 [2024-11-25 10:42:16.238404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:21.966 [2024-11-25 10:42:16.238424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:21.966 [2024-11-25 10:42:16.238434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:21.966 [2024-11-25 10:42:16.238461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:36:21.966 [2024-11-25 10:42:16.238471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:21.966 [2024-11-25 10:42:16.238492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:21.966 [2024-11-25 10:42:16.238502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:21.966 [2024-11-25 10:42:16.238521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:21.966 [2024-11-25 10:42:16.238531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:21.966 [2024-11-25 10:42:16.238563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:21.966 [2024-11-25 10:42:16.238573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:21.966 [2024-11-25 10:42:16.238593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:21.966 [2024-11-25 10:42:16.238632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:21.966 [2024-11-25 10:42:16.238653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:21.966 [2024-11-25 10:42:16.238663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:21.966 [2024-11-25 10:42:16.238683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:21.966 [2024-11-25 10:42:16.238693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:21.966 [2024-11-25 10:42:16.238713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:21.966 [2024-11-25 10:42:16.238743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:21.966 [2024-11-25 10:42:16.238773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:21.966 [2024-11-25 10:42:16.238783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238814] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:21.966 [2024-11-25 10:42:16.238826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:21.966 [2024-11-25 10:42:16.238837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:36:21.966 [2024-11-25 10:42:16.238861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:21.966 [2024-11-25 10:42:16.238872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:21.966 [2024-11-25 10:42:16.238882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:21.966 [2024-11-25 10:42:16.238892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:21.966 [2024-11-25 10:42:16.238903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:21.966 [2024-11-25 10:42:16.238913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:21.966 [2024-11-25 10:42:16.238925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:21.966 [2024-11-25 10:42:16.238939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.238952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:21.966 [2024-11-25 10:42:16.238963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.238974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.238985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:21.966 [2024-11-25 10:42:16.238996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:21.966 [2024-11-25 10:42:16.239007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:21.966 [2024-11-25 10:42:16.239018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:21.966 [2024-11-25 10:42:16.239028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:21.966 [2024-11-25 10:42:16.239106] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:36:21.966 [2024-11-25 10:42:16.239118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:21.966 [2024-11-25 10:42:16.239142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:21.966 [2024-11-25 10:42:16.239153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:21.966 [2024-11-25 10:42:16.239164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:21.966 [2024-11-25 10:42:16.239176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.966 [2024-11-25 10:42:16.239194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:21.966 [2024-11-25 10:42:16.239206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:36:21.966 [2024-11-25 10:42:16.239218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.966 [2024-11-25 10:42:16.271680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.966 [2024-11-25 10:42:16.271932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:21.966 [2024-11-25 10:42:16.271978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.379 ms 00:36:21.966 [2024-11-25 10:42:16.271993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.966 [2024-11-25 10:42:16.272056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.966 [2024-11-25 10:42:16.272072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:21.966 [2024-11-25 10:42:16.272084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:21.966 [2024-11-25 10:42:16.272095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.226 [2024-11-25 10:42:16.313716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.226 [2024-11-25 10:42:16.313764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:22.227 [2024-11-25 10:42:16.313826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.537 ms 00:36:22.227 [2024-11-25 10:42:16.313839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.313923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.313952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:22.227 [2024-11-25 10:42:16.313966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:36:22.227 [2024-11-25 10:42:16.313977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.314185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.314205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:22.227 [2024-11-25 10:42:16.314233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.100 ms 00:36:22.227 [2024-11-25 10:42:16.314245] mngt/ftl_mngt.c: 431:trace_step: 
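The layout dumps above print each region as blk_offs/blk_sz in FTL blocks; the MiB figures in the NV-cache and base-device dumps follow directly once a block size is assumed. A minimal cross-check, assuming SPDK FTL's 4096-byte block (blk_to_mib is a hypothetical helper, not part of the test scripts):

    # Hypothetical helper, not from the SPDK scripts: FTL blocks -> MiB,
    # assuming a 4096-byte FTL block.
    blk_to_mib() { echo "scale=2; $1 * 4096 / 1048576" | bc; }

    blk_to_mib $((0x480000))   # base-dev region type:0x9 -> 18432.00 MiB (data_btm above)
    blk_to_mib $((0xe80))      # nvc region type:0x2   -> 14.50 MiB (the l2p region above)
    # Sanity check on the l2p size: 3774873 L2P entries * 4-byte addresses is
    # ~14.40 MiB, which padded up to whole aligned blocks matches the 14.50 MiB region.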
*NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.314304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.314327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:22.227 [2024-11-25 10:42:16.314341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:36:22.227 [2024-11-25 10:42:16.314352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.333845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.333885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:22.227 [2024-11-25 10:42:16.333917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.457 ms 00:36:22.227 [2024-11-25 10:42:16.333928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.334072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.334102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:36:22.227 [2024-11-25 10:42:16.334116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:22.227 [2024-11-25 10:42:16.334131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.362092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.362137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:36:22.227 [2024-11-25 10:42:16.362170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.933 ms 00:36:22.227 [2024-11-25 10:42:16.362187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.373302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.373340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:22.227 [2024-11-25 10:42:16.373380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.732 ms 00:36:22.227 [2024-11-25 10:42:16.373391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.444851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.444921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:22.227 [2024-11-25 10:42:16.444954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 71.390 ms 00:36:22.227 [2024-11-25 10:42:16.444966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.445186] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:36:22.227 [2024-11-25 10:42:16.445342] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:36:22.227 [2024-11-25 10:42:16.445471] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:36:22.227 [2024-11-25 10:42:16.445584] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:36:22.227 [2024-11-25 10:42:16.445597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.445607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:36:22.227 [2024-11-25 
10:42:16.445619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:36:22.227 [2024-11-25 10:42:16.445629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.445739] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:36:22.227 [2024-11-25 10:42:16.445764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.445799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:36:22.227 [2024-11-25 10:42:16.445828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:36:22.227 [2024-11-25 10:42:16.445839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.462771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.462842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:36:22.227 [2024-11-25 10:42:16.462877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.881 ms 00:36:22.227 [2024-11-25 10:42:16.462888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.472413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.472449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:36:22.227 [2024-11-25 10:42:16.472480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:36:22.227 [2024-11-25 10:42:16.472490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.227 [2024-11-25 10:42:16.472596] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:36:22.227 [2024-11-25 10:42:16.472882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.227 [2024-11-25 10:42:16.472904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:22.227 [2024-11-25 10:42:16.472916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:36:22.227 [2024-11-25 10:42:16.472942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.796 [2024-11-25 10:42:17.062987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.796 [2024-11-25 10:42:17.063411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:22.796 [2024-11-25 10:42:17.063442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 588.947 ms 00:36:22.796 [2024-11-25 10:42:17.063456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.796 [2024-11-25 10:42:17.068435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.796 [2024-11-25 10:42:17.068478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:22.796 [2024-11-25 10:42:17.068511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.289 ms 00:36:22.796 [2024-11-25 10:42:17.068537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.796 [2024-11-25 10:42:17.069070] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:36:22.796 [2024-11-25 10:42:17.069104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.796 [2024-11-25 10:42:17.069119] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:22.796 [2024-11-25 10:42:17.069147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:36:22.796 [2024-11-25 10:42:17.069159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.796 [2024-11-25 10:42:17.069240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.796 [2024-11-25 10:42:17.069258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:22.796 [2024-11-25 10:42:17.069270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:22.796 [2024-11-25 10:42:17.069281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:22.796 [2024-11-25 10:42:17.069352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 596.755 ms, result 0 00:36:22.796 [2024-11-25 10:42:17.069408] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:36:22.796 [2024-11-25 10:42:17.069487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:22.796 [2024-11-25 10:42:17.069501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:22.796 [2024-11-25 10:42:17.069512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:36:22.796 [2024-11-25 10:42:17.069523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.665019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.665284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:23.365 [2024-11-25 10:42:17.665410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 594.287 ms 00:36:23.365 [2024-11-25 10:42:17.665458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.670639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.670850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:23.365 [2024-11-25 10:42:17.670877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.332 ms 00:36:23.365 [2024-11-25 10:42:17.670891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.671264] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:36:23.365 [2024-11-25 10:42:17.671295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.671308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:23.365 [2024-11-25 10:42:17.671321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:36:23.365 [2024-11-25 10:42:17.671332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.671386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.671404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:23.365 [2024-11-25 10:42:17.671417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:23.365 [2024-11-25 10:42:17.671428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 
10:42:17.671479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 602.067 ms, result 0 00:36:23.365 [2024-11-25 10:42:17.671576] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:23.365 [2024-11-25 10:42:17.671593] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:23.365 [2024-11-25 10:42:17.671606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.671628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:36:23.365 [2024-11-25 10:42:17.671639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1199.040 ms 00:36:23.365 [2024-11-25 10:42:17.671650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.671691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.671707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:36:23.365 [2024-11-25 10:42:17.671727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:23.365 [2024-11-25 10:42:17.671738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.683772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:23.365 [2024-11-25 10:42:17.684180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.684334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:23.365 [2024-11-25 10:42:17.684453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.420 ms 00:36:23.365 [2024-11-25 10:42:17.684569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.685334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.685485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:36:23.365 [2024-11-25 10:42:17.685636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:36:23.365 [2024-11-25 10:42:17.685680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.688138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.688310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:36:23.365 [2024-11-25 10:42:17.688338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.325 ms 00:36:23.365 [2024-11-25 10:42:17.688350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.688421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.688439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:36:23.365 [2024-11-25 10:42:17.688452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:23.365 [2024-11-25 10:42:17.688470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.688600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.688617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:23.365 
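The chunk offsets in the recovery messages line up with the NV cache geometry printed during startup (5120.00 MiB cache, chunk count 5). Assuming the offsets are expressed in 4 KiB FTL blocks, the arithmetic is:

    # Assumed units: offsets in 4 KiB FTL blocks. 5120 MiB / 5 chunks = 1024 MiB
    # per chunk, i.e. 262144 blocks, so the two recovered open chunks sit at the
    # chunk-1 and chunk-2 boundaries.
    echo $(( 1024 * 1024 * 1024 / 4096 ))       # 262144 -> first recovered chunk
    echo $(( 2 * 1024 * 1024 * 1024 / 4096 ))   # 524288 -> second recovered chunk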
[2024-11-25 10:42:17.688629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:36:23.365 [2024-11-25 10:42:17.688655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.688682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.688694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:23.365 [2024-11-25 10:42:17.688705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:36:23.365 [2024-11-25 10:42:17.688715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.688774] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:23.365 [2024-11-25 10:42:17.688831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.688844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:23.365 [2024-11-25 10:42:17.688855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:36:23.365 [2024-11-25 10:42:17.688882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.688947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:23.365 [2024-11-25 10:42:17.688962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:23.365 [2024-11-25 10:42:17.688973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:36:23.365 [2024-11-25 10:42:17.688983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:23.365 [2024-11-25 10:42:17.690246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1488.161 ms, result 0 00:36:23.625 [2024-11-25 10:42:17.704955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.625 [2024-11-25 10:42:17.720951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:23.625 [2024-11-25 10:42:17.730736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:23.625 Validate MD5 checksum, iteration 1 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:23.625 10:42:17 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:23.625 10:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:23.625 [2024-11-25 10:42:17.884379] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:36:23.625 [2024-11-25 10:42:17.884824] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84697 ] 00:36:23.885 [2024-11-25 10:42:18.068082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.885 [2024-11-25 10:42:18.193674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.788  [2024-11-25T10:42:21.057Z] Copying: 485/1024 [MB] (485 MBps) [2024-11-25T10:42:21.057Z] Copying: 966/1024 [MB] (481 MBps) [2024-11-25T10:42:22.432Z] Copying: 1024/1024 [MB] (average 482 MBps) 00:36:28.099 00:36:28.099 10:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:28.100 10:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:30.634 Validate MD5 checksum, iteration 2 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c54e06a2e1b7a3c8bc6a4d75588fc42f 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c54e06a2e1b7a3c8bc6a4d75588fc42f != \c\5\4\e\0\6\a\2\e\1\b\7\a\3\c\8\b\c\6\a\4\d\7\5\5\8\8\f\c\4\2\f ]] 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:30.634 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:30.635 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:30.635 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:30.635 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:30.635 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:30.635 10:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:30.635 
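The xtrace lines above come from the checksum loop in ftl/upgrade_shutdown.sh: each iteration pulls a 1024 MiB window out of ftln1 over NVMe/TCP with spdk_dd, hashes it, and advances skip. A simplified reconstruction from the trace (the sums array is a stand-in for the checksums recorded before shutdown; the real script's error handling is elided):

    # Reconstructed from the xtrace above; simplified sketch, not the verbatim script.
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd with the initiator JSON config and RPC socket:
        #   spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        #           --json=$testdir/config/ini.json \
        #           --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # c54e06a2... (iteration 1) and 550d22f7... (iteration 2) in this run.
        [[ $sum == "${sums[i]}" ]] || return 1
    done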
[2024-11-25 10:42:24.504256] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 00:36:30.635 [2024-11-25 10:42:24.504683] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84760 ] 00:36:30.635 [2024-11-25 10:42:24.698158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.635 [2024-11-25 10:42:24.861470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.539  [2024-11-25T10:42:27.810Z] Copying: 410/1024 [MB] (410 MBps) [2024-11-25T10:42:28.068Z] Copying: 854/1024 [MB] (444 MBps) [2024-11-25T10:42:29.444Z] Copying: 1024/1024 [MB] (average 433 MBps) 00:36:35.111 00:36:35.111 10:42:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:35.111 10:42:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=550d22f7c0129a4a87b02670bd094425 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 550d22f7c0129a4a87b02670bd094425 != \5\5\0\d\2\2\f\7\c\0\1\2\9\a\4\a\8\7\b\0\2\6\7\0\b\d\0\9\4\4\2\5 ]] 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84662 ]] 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84662 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84662 ']' 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84662 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84662 00:36:37.643 killing process with pid 84662 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84662' 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84662 00:36:37.643 10:42:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84662 00:36:38.581 [2024-11-25 10:42:32.693065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:38.581 [2024-11-25 10:42:32.712463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.712529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:38.581 [2024-11-25 10:42:32.712561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:38.581 [2024-11-25 10:42:32.712574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.712607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:38.581 [2024-11-25 10:42:32.716546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.716584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:38.581 [2024-11-25 10:42:32.716600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.917 ms 00:36:38.581 [2024-11-25 10:42:32.716620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.716980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.717000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:38.581 [2024-11-25 10:42:32.717011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:36:38.581 [2024-11-25 10:42:32.717022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.718404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.718447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:38.581 [2024-11-25 10:42:32.718464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.353 ms 00:36:38.581 [2024-11-25 10:42:32.718477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.719783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.720016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:38.581 [2024-11-25 10:42:32.720044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.254 ms 00:36:38.581 [2024-11-25 10:42:32.720068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.733985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.734030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:38.581 [2024-11-25 10:42:32.734049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.850 ms 00:36:38.581 [2024-11-25 10:42:32.734070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.741099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.741153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:38.581 [2024-11-25 10:42:32.741169] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.980 ms 00:36:38.581 [2024-11-25 10:42:32.741193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.741322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.741343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:38.581 [2024-11-25 10:42:32.741365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:36:38.581 [2024-11-25 10:42:32.741377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.754384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.754435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:38.581 [2024-11-25 10:42:32.754451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.974 ms 00:36:38.581 [2024-11-25 10:42:32.754463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.767022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.767192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:38.581 [2024-11-25 10:42:32.767221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.517 ms 00:36:38.581 [2024-11-25 10:42:32.767233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.779808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.779887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:38.581 [2024-11-25 10:42:32.779920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.514 ms 00:36:38.581 [2024-11-25 10:42:32.779936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.792484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.581 [2024-11-25 10:42:32.792690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:38.581 [2024-11-25 10:42:32.792718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.392 ms 00:36:38.581 [2024-11-25 10:42:32.792730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.581 [2024-11-25 10:42:32.792799] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:38.581 [2024-11-25 10:42:32.792826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:38.581 [2024-11-25 10:42:32.792841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:38.581 [2024-11-25 10:42:32.792854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:38.581 [2024-11-25 10:42:32.792866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:38.581 [2024-11-25 10:42:32.792878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.792890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.792902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 
10:42:32.792914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.792926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.792948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.792970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.792982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:38.582 [2024-11-25 10:42:32.793094] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:38.582 [2024-11-25 10:42:32.793106] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 4be8c413-8bff-44f0-99f1-add31d9d010b 00:36:38.582 [2024-11-25 10:42:32.793118] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:38.582 [2024-11-25 10:42:32.793131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:36:38.582 [2024-11-25 10:42:32.793142] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:36:38.582 [2024-11-25 10:42:32.793154] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:36:38.582 [2024-11-25 10:42:32.793166] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:38.582 [2024-11-25 10:42:32.793178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:38.582 [2024-11-25 10:42:32.793190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:38.582 [2024-11-25 10:42:32.793200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:38.582 [2024-11-25 10:42:32.793211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:38.582 [2024-11-25 10:42:32.793222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.582 [2024-11-25 10:42:32.793245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:38.582 [2024-11-25 10:42:32.793258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.425 ms 00:36:38.582 [2024-11-25 10:42:32.793270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.582 [2024-11-25 10:42:32.811369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.582 [2024-11-25 10:42:32.811598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:38.582 [2024-11-25 10:42:32.811626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 18.048 ms 00:36:38.582 [2024-11-25 10:42:32.811638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.582 [2024-11-25 10:42:32.812248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.582 [2024-11-25 10:42:32.812276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:38.582 [2024-11-25 10:42:32.812290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:36:38.582 [2024-11-25 10:42:32.812315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.582 [2024-11-25 10:42:32.872931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.582 [2024-11-25 10:42:32.872992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:38.582 [2024-11-25 10:42:32.873009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.582 [2024-11-25 10:42:32.873020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.582 [2024-11-25 10:42:32.873098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.582 [2024-11-25 10:42:32.873112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:38.582 [2024-11-25 10:42:32.873123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.582 [2024-11-25 10:42:32.873133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.582 [2024-11-25 10:42:32.873260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.582 [2024-11-25 10:42:32.873279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:38.582 [2024-11-25 10:42:32.873290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.582 [2024-11-25 10:42:32.873300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.582 [2024-11-25 10:42:32.873324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.582 [2024-11-25 10:42:32.873344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:38.582 [2024-11-25 10:42:32.873356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.582 [2024-11-25 10:42:32.873366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.841 [2024-11-25 10:42:32.990203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.841 [2024-11-25 10:42:32.990286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:38.841 [2024-11-25 10:42:32.990307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.841 [2024-11-25 10:42:32.990319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.841 [2024-11-25 10:42:33.081613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.841 [2024-11-25 10:42:33.081688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:38.841 [2024-11-25 10:42:33.081706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.841 [2024-11-25 10:42:33.081718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.081976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.842 [2024-11-25 10:42:33.081998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:38.842 [2024-11-25 10:42:33.082011] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.842 [2024-11-25 10:42:33.082023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.082086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.842 [2024-11-25 10:42:33.082104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:38.842 [2024-11-25 10:42:33.082123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.842 [2024-11-25 10:42:33.082148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.082364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.842 [2024-11-25 10:42:33.082384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:38.842 [2024-11-25 10:42:33.082407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.842 [2024-11-25 10:42:33.082430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.082504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.842 [2024-11-25 10:42:33.082521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:38.842 [2024-11-25 10:42:33.082534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.842 [2024-11-25 10:42:33.082553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.082612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.842 [2024-11-25 10:42:33.082635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:38.842 [2024-11-25 10:42:33.082649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.842 [2024-11-25 10:42:33.082661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.082719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:38.842 [2024-11-25 10:42:33.082737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:38.842 [2024-11-25 10:42:33.082756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:38.842 [2024-11-25 10:42:33.082767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.842 [2024-11-25 10:42:33.082942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 370.441 ms, result 0 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:40.222 Remove shared memory files 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
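The doubled "rm -f rm -f" text in the trace appears to be how this log renders the remove_shm commands when their glob arguments expand to nothing; only two concrete paths (the spdk_tgt trace file and /dev/shm/iscsi) show up in the lines that follow. A hedged sketch keeping just what the log shows:

    # Sketch of ftl/common.sh remove_shm as suggested by the trace; only the two
    # paths below actually appear in this run, the other rm -f calls expand empty.
    remove_shm() {
        echo "Remove shared memory files"
        rm -f /dev/shm/spdk_tgt_trace.pid84453   # per-run trace file (pid varies)
        rm -f /dev/shm/iscsi
    }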
00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84453 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:40.222 ************************************ 00:36:40.222 END TEST ftl_upgrade_shutdown 00:36:40.222 ************************************ 00:36:40.222 00:36:40.222 real 1m32.348s 00:36:40.222 user 2m10.545s 00:36:40.222 sys 0m24.250s 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.222 10:42:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@14 -- # killprocess 76889 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@954 -- # '[' -z 76889 ']' 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@958 -- # kill -0 76889 00:36:40.481 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76889) - No such process 00:36:40.481 Process with pid 76889 is not found 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76889 is not found' 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:40.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84895 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:40.481 10:42:34 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84895 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@835 -- # '[' -z 84895 ']' 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.481 10:42:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:40.481 [2024-11-25 10:42:34.709693] Starting SPDK v25.01-pre git sha1 1e9cebf19 / DPDK 24.03.0 initialization... 
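killprocess probes the pid with kill -0 before signalling, which is why the long-gone pid 76889 only produces the "No such process" message while 84662 and, later, 84895 are actually killed and waited on. A simplified reconstruction from the traced checks (the uname/ps reactor_0-vs-sudo special-casing visible in the trace is elided):

    # Reconstructed from the autotest_common.sh xtrace; simplified sketch.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            # Stale pid: matches the 'Process with pid 76889 is not found' path.
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }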
00:36:40.481 [2024-11-25 10:42:34.709894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84895 ] 00:36:40.740 [2024-11-25 10:42:34.905492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.740 [2024-11-25 10:42:35.069491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.117 10:42:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.117 10:42:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:42.117 10:42:36 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:42.117 nvme0n1 00:36:42.117 10:42:36 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:42.117 10:42:36 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:42.117 10:42:36 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:42.684 10:42:36 ftl -- ftl/common.sh@28 -- # stores=f58de5d1-0229-4454-b29c-f90e12108c03 00:36:42.684 10:42:36 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:42.684 10:42:36 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f58de5d1-0229-4454-b29c-f90e12108c03 00:36:42.942 10:42:37 ftl -- ftl/ftl.sh@23 -- # killprocess 84895 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 84895 ']' 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@958 -- # kill -0 84895 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@959 -- # uname 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84895 00:36:42.942 killing process with pid 84895 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:42.942 10:42:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:42.943 10:42:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84895' 00:36:42.943 10:42:37 ftl -- common/autotest_common.sh@973 -- # kill 84895 00:36:42.943 10:42:37 ftl -- common/autotest_common.sh@978 -- # wait 84895 00:36:45.476 10:42:39 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:45.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:45.476 Waiting for block devices as requested 00:36:45.476 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:45.476 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:45.476 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:45.734 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:51.003 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:51.003 Remove shared memory files 00:36:51.003 10:42:44 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:51.003 10:42:44 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:51.003 10:42:44 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:51.003 10:42:44 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:51.003 10:42:44 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:51.003 10:42:44 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:51.003 10:42:44 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:51.003 
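Before the final teardown, ftl.sh restarts spdk_tgt (pid 84895), re-attaches the NVMe controller on 0000:00:11.0, and clears any leftover lvol store; clear_lvols is the jq pipeline traced above. A sketch using only the RPCs visible in the trace:

    # Per the ftl/common.sh trace: enumerate lvol stores, delete each by UUID.
    clear_lvols() {
        stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            # f58de5d1-0229-4454-b29c-f90e12108c03 in this run
            scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
        done
    }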
00:36:51.003 ************************************
00:36:51.003 END TEST ftl
00:36:51.003 ************************************
00:36:51.003
00:36:51.003 real 12m12.419s
00:36:51.003 user 15m19.055s
00:36:51.003 sys 1m38.732s
00:36:51.003 10:42:44 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:51.003 10:42:44 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:51.003 10:42:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:51.003 10:42:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:51.003 10:42:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:51.003 10:42:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:51.003 10:42:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:51.003 10:42:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:51.003 10:42:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:51.003 10:42:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:51.003 10:42:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:51.003 10:42:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:51.003 10:42:44 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:51.003 10:42:44 -- common/autotest_common.sh@10 -- # set +x
00:36:51.003 10:42:44 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:51.003 10:42:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:51.003 10:42:44 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:51.003 10:42:44 -- common/autotest_common.sh@10 -- # set +x
00:36:52.379 INFO: APP EXITING
00:36:52.379 INFO: killing all VMs
00:36:52.379 INFO: killing vhost app
00:36:52.379 INFO: EXIT DONE
00:36:52.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:53.206 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:36:53.206 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:36:53.206 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:36:53.206 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:36:53.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:54.032 Cleaning
00:36:54.032 Removing: /var/run/dpdk/spdk0/config
00:36:54.032 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:54.032 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:54.032 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:54.032 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:54.032 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:54.032 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:54.032 Removing: /var/run/dpdk/spdk0
00:36:54.032 Removing: /var/run/dpdk/spdk_pid57646
00:36:54.032 Removing: /var/run/dpdk/spdk_pid57881
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58105
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58213
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58265
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58404
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58422
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58632
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58738
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58856
00:36:54.032 Removing: /var/run/dpdk/spdk_pid58978
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59092
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59131
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59173
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59249
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59355
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59832
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59917
00:36:54.032 Removing: /var/run/dpdk/spdk_pid59994
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60010
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60165
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60186
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60342
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60369
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60433
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60462
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60526
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60550
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60751
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60793
00:36:54.032 Removing: /var/run/dpdk/spdk_pid60877
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61071
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61166
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61219
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61696
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61807
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61921
00:36:54.032 Removing: /var/run/dpdk/spdk_pid61980
00:36:54.032 Removing: /var/run/dpdk/spdk_pid62011
00:36:54.032 Removing: /var/run/dpdk/spdk_pid62095
00:36:54.032 Removing: /var/run/dpdk/spdk_pid62726
00:36:54.032 Removing: /var/run/dpdk/spdk_pid62776
00:36:54.032 Removing: /var/run/dpdk/spdk_pid63301
00:36:54.032 Removing: /var/run/dpdk/spdk_pid63410
00:36:54.032 Removing: /var/run/dpdk/spdk_pid63525
00:36:54.032 Removing: /var/run/dpdk/spdk_pid63583
00:36:54.032 Removing: /var/run/dpdk/spdk_pid63609
00:36:54.032 Removing: /var/run/dpdk/spdk_pid63640
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65551
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65700
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65704
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65716
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65765
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65770
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65782
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65827
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65831
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65843
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65893
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65897
00:36:54.032 Removing: /var/run/dpdk/spdk_pid65909
00:36:54.032 Removing: /var/run/dpdk/spdk_pid67316
00:36:54.032 Removing: /var/run/dpdk/spdk_pid67425
00:36:54.032 Removing: /var/run/dpdk/spdk_pid68842
00:36:54.032 Removing: /var/run/dpdk/spdk_pid70583
00:36:54.032 Removing: /var/run/dpdk/spdk_pid70668
00:36:54.032 Removing: /var/run/dpdk/spdk_pid70752
00:36:54.032 Removing: /var/run/dpdk/spdk_pid70864
00:36:54.032 Removing: /var/run/dpdk/spdk_pid70967
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71067
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71148
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71229
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71339
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71437
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71539
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71619
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71700
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71810
00:36:54.032 Removing: /var/run/dpdk/spdk_pid71907
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72010
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72091
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72172
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72282
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72378
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72481
00:36:54.032 Removing: /var/run/dpdk/spdk_pid72565
00:36:54.292 Removing: /var/run/dpdk/spdk_pid72644
00:36:54.292 Removing: /var/run/dpdk/spdk_pid72720
00:36:54.292 Removing: /var/run/dpdk/spdk_pid72801
00:36:54.292 Removing: /var/run/dpdk/spdk_pid72910
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73005
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73111
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73185
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73265
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73344
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73420
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73530
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73625
00:36:54.292 Removing: /var/run/dpdk/spdk_pid73770
00:36:54.292 Removing: /var/run/dpdk/spdk_pid74060
00:36:54.292 Removing: /var/run/dpdk/spdk_pid74102
00:36:54.292 Removing: /var/run/dpdk/spdk_pid74589
00:36:54.292 Removing: /var/run/dpdk/spdk_pid74780
00:36:54.292 Removing: /var/run/dpdk/spdk_pid74890
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75003
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75057
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75088
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75378
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75448
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75531
00:36:54.292 Removing: /var/run/dpdk/spdk_pid75957
00:36:54.292 Removing: /var/run/dpdk/spdk_pid76098
00:36:54.292 Removing: /var/run/dpdk/spdk_pid76889
00:36:54.292 Removing: /var/run/dpdk/spdk_pid77038
00:36:54.292 Removing: /var/run/dpdk/spdk_pid77254
00:36:54.292 Removing: /var/run/dpdk/spdk_pid77362
00:36:54.292 Removing: /var/run/dpdk/spdk_pid77724
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78009
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78367
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78573
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78705
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78776
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78923
00:36:54.292 Removing: /var/run/dpdk/spdk_pid78959
00:36:54.292 Removing: /var/run/dpdk/spdk_pid79023
00:36:54.292 Removing: /var/run/dpdk/spdk_pid79232
00:36:54.292 Removing: /var/run/dpdk/spdk_pid79489
00:36:54.292 Removing: /var/run/dpdk/spdk_pid79932
00:36:54.292 Removing: /var/run/dpdk/spdk_pid80374
00:36:54.292 Removing: /var/run/dpdk/spdk_pid80789
00:36:54.292 Removing: /var/run/dpdk/spdk_pid81315
00:36:54.292 Removing: /var/run/dpdk/spdk_pid81464
00:36:54.292 Removing: /var/run/dpdk/spdk_pid81569
00:36:54.292 Removing: /var/run/dpdk/spdk_pid82290
00:36:54.292 Removing: /var/run/dpdk/spdk_pid82362
00:36:54.292 Removing: /var/run/dpdk/spdk_pid82844
00:36:54.292 Removing: /var/run/dpdk/spdk_pid83267
00:36:54.292 Removing: /var/run/dpdk/spdk_pid83834
00:36:54.292 Removing: /var/run/dpdk/spdk_pid83962
00:36:54.292 Removing: /var/run/dpdk/spdk_pid84015
00:36:54.292 Removing: /var/run/dpdk/spdk_pid84082
00:36:54.292 Removing: /var/run/dpdk/spdk_pid84147
00:36:54.292 Removing: /var/run/dpdk/spdk_pid84217
00:36:54.292 Removing: /var/run/dpdk/spdk_pid84453
00:36:54.293 Removing: /var/run/dpdk/spdk_pid84522
00:36:54.293 Removing: /var/run/dpdk/spdk_pid84589
00:36:54.293 Removing: /var/run/dpdk/spdk_pid84662
00:36:54.293 Removing: /var/run/dpdk/spdk_pid84697
00:36:54.293 Removing: /var/run/dpdk/spdk_pid84760
00:36:54.293 Removing: /var/run/dpdk/spdk_pid84895
00:36:54.293 Clean
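The Cleaning/Removing block above is DPDK runtime state being dropped: the hugepage/memseg directory of the last target (/var/run/dpdk/spdk0) plus one pid file for every spdk_tgt the whole run started. A hypothetical equivalent of that step, assuming a simple echo-then-delete loop (the real autotest_cleanup may differ):

    # echo each path before deleting so the log records exactly what went away
    echo Cleaning
    for f in /var/run/dpdk/spdk0/* /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*; do
        [ -e "$f" ] || continue
        echo "Removing: $f"
        rm -rf "$f"
    done
    echo Clean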
00:36:54.293 10:42:48 -- common/autotest_common.sh@1453 -- # return 0
00:36:54.293 10:42:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:54.293 10:42:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:54.293 10:42:48 -- common/autotest_common.sh@10 -- # set +x
00:36:54.551 10:42:48 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:54.551 10:42:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:54.551 10:42:48 -- common/autotest_common.sh@10 -- # set +x
00:36:54.551 10:42:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:54.551 10:42:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:54.551 10:42:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:54.551 10:42:48 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:54.551 10:42:48 -- spdk/autotest.sh@398 -- # hostname
00:36:54.551 10:42:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:54.809 geninfo: WARNING: invalid characters removed from testname!
00:37:21.382 10:43:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:21.640 10:43:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:24.928 10:43:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:27.462 10:43:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:29.993 10:43:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:33.280 10:43:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:35.814 10:43:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
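The lcov invocations above are the coverage post-processing: merge the base and test captures into cov_total.info, then filter out paths that should not count against SPDK (DPDK sources, system headers, example and tool code). Condensed into a loop, with $OUT standing in for /home/vagrant/spdk_repo/spdk/../output; the real run also passes the genhtml/geninfo --rc flags on every call and adds --ignore-errors unused,unused only on the '/usr/*' pass, so this is a simplification:

    LCOV_RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    # merge the pre-test and post-test captures
    lcov $LCOV_RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # strip each out-of-scope path pattern from the merged tracefile
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_RC -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done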
00:37:35.814 10:43:29 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:35.814 10:43:29 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:35.814 10:43:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:35.814 10:43:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:35.814 10:43:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:35.814 + [[ -n 5293 ]]
00:37:35.814 + sudo kill 5293
00:37:35.826 [Pipeline] }
00:37:35.845 [Pipeline] // timeout
00:37:35.851 [Pipeline] }
00:37:35.868 [Pipeline] // stage
00:37:35.874 [Pipeline] }
00:37:35.890 [Pipeline] // catchError
00:37:35.900 [Pipeline] stage
00:37:35.902 [Pipeline] { (Stop VM)
00:37:35.916 [Pipeline] sh
00:37:36.266 + vagrant halt
00:37:39.554 ==> default: Halting domain...
00:37:46.134 [Pipeline] sh
00:37:46.416 + vagrant destroy -f
00:37:49.704 ==> default: Removing domain...
00:37:49.977 [Pipeline] sh
00:37:50.259 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:50.268 [Pipeline] }
00:37:50.287 [Pipeline] // stage
00:37:50.294 [Pipeline] }
00:37:50.312 [Pipeline] // dir
00:37:50.319 [Pipeline] }
00:37:50.335 [Pipeline] // wrap
00:37:50.342 [Pipeline] }
00:37:50.356 [Pipeline] // catchError
00:37:50.366 [Pipeline] stage
00:37:50.369 [Pipeline] { (Epilogue)
00:37:50.382 [Pipeline] sh
00:37:50.664 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:57.278 [Pipeline] catchError
00:37:57.280 [Pipeline] {
00:37:57.294 [Pipeline] sh
00:37:57.575 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:57.834 Artifacts sizes are good
00:37:57.843 [Pipeline] }
00:37:57.858 [Pipeline] // catchError
00:37:57.871 [Pipeline] archiveArtifacts
00:37:57.878 Archiving artifacts
00:37:57.985 [Pipeline] cleanWs
00:37:57.996 [WS-CLEANUP] Deleting project workspace...
00:37:57.996 [WS-CLEANUP] Deferred wipeout is used...
00:37:58.001 [WS-CLEANUP] done
00:37:58.004 [Pipeline] }
00:37:58.019 [Pipeline] // stage
00:37:58.025 [Pipeline] }
00:37:58.040 [Pipeline] // node
00:37:58.046 [Pipeline] End of Pipeline
00:37:58.085 Finished: SUCCESS
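For reference, the timing_finish step traced just before the pipeline teardown renders the per-step timing log into a flame graph when the FlameGraph scripts are present. A sketch of that logic as reconstructed from the xtrace (the redirect to timing.svg is an assumption; the trace does not show where the output goes):

    timing_finish() {
        [[ -e $output_dir/timing.txt ]] || return 0
        flamegraph=/usr/local/FlameGraph/flamegraph.pl
        [[ -x $flamegraph ]] || return 0
        # one flame-graph frame per timed step, frame width = seconds spent
        $flamegraph --title 'Build Timing' --nametype Step: --countname seconds \
            "$output_dir/timing.txt" > "$output_dir/timing.svg"
    }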